Name : Dario Prawara Teh Wei Rong | Class : DAAA / FT / 2B / 04 | Admission Number : 2201858
Objective : Implement an image classifier using a convolutional neural network to classify images of 15 types of vegetables, such as Bean, Bitter Gourd, Bottle Gourd, Brinjal and more.
Build two types of neural networks, one for each input size.
Background Information : A dataset of 224 by 224 pixel (px) images is given, containing 15 classes of vegetables. The images are to be converted to grayscale and downscaled to either 31 by 31 px or 128 by 128 px.
Current research in vegetable image classification leverages large, diverse datasets and advanced deep learning techniques to achieve robust and accurate classification.
Studies have employed hybrid deep learning frameworks, custom-designed convolutional neural networks (CNNs), and vision models such as GoogLeNet and improved YOLOv4, which are highlighted for their strong performance in classifying fruits and vegetables, including fine-grained categorizations such as fresh versus rotten.
The classification systems developed are crucial for applications across the fresh supply chain, supermarkets, and related fields, underscoring the practical implications of these technological advancements; the field has grown rapidly in recent years.
There is now a focus not only on accuracy but also on computational efficiency, with models designed to be resource-efficient while maintaining high performance. Transfer learning and advanced architectures are pivotal in enhancing the capabilities of these systems; a brief illustrative sketch follows.
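To illustrate the transfer-learning idea mentioned above, here is a minimal Keras sketch. It is an illustrative assumption only, not part of this project's pipeline: pretrained ImageNet weights expect 3-channel RGB input, unlike the grayscale images used in this notebook.
# Hedged sketch: transfer learning with a pretrained backbone (illustrative assumption)
import tensorflow as tf
# Frozen ImageNet feature extractor; the 128 x 128 x 3 input shape is illustrative
base = tf.keras.applications.MobileNetV2(input_shape=(128, 128, 3), include_top=False, weights='imagenet')
base.trainable = False
# Small classification head for the 15 vegetable classes
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(15, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])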
Import necessary libraries for pre-processing, data exploration, feature engineering and model evaluation.
Some libraries used include tensorflow, numpy, pandas, matplotlib, seaborn, sklearn and keras.
# Import the necessary modules and libraries
# TensorFlow for deep learning framework
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
# Visualization and Processing Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import math
import random
from PIL import Image
import os
import cv2 as cv
import copy
from imblearn.over_sampling import RandomOverSampler
# Scikit-learn's classification_report for evaluating classification models
from sklearn.metrics import classification_report
# Keras Libraries
from keras.regularizers import l1, l2, l1_l2
from keras.layers import (AveragePooling2D, ZeroPadding2D, BatchNormalization, Activation, MaxPool2D, Add, ReLU)
from keras.utils import to_categorical
from keras.metrics import Precision, Recall, CategoricalAccuracy
from keras.models import Sequential, Model, load_model
from keras.layers import Dropout, Flatten, Conv2D, MaxPooling2D, Normalization, Dense, GlobalAveragePooling2D
from keras import Input
from keras.optimizers import SGD, Adam
from keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers.schedules import CosineDecay
import keras_tuner as kt
# Ignore warnings
import warnings
warnings.filterwarnings('ignore')
# Avoid OOM errors by setting GPU Memory Consumption Growth
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
# Show the detected GPU (the loop variable holds the last device found)
gpu
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
tf.random.set_seed(88)
# Define utility functions for analysis later
# Function to load images in original size
def load_images(folder_dir):
    image_data_generator = ImageDataGenerator()
    loaded_images = image_data_generator.flow_from_directory(
        folder_dir,
        target_size=(224, 224),
        color_mode='rgb',          # loaded in color; grayscaling happens later in pre-processing
        batch_size=32,             # Batch size
        class_mode='categorical',  # Use 'categorical' for one-hot encoded labels
        shuffle=False,             # Set to False to keep data in the same order
        seed=123                   # Set a seed for reproducibility
    )
    # Extract filenames and class labels
    filenames = loaded_images.filenames
    labels = loaded_images.classes
    # Create a DataFrame of file names and integer labels
    df = pd.DataFrame({'image_file': filenames, 'label': labels})
    return df
# Function to load, resize and grayscale images
def load_and_preprocess_images(df, directory, target_size):
    # Initialize a list to hold the image arrays
    image_arrays = []
    for file_name in df['image_file']:
        # Generate the full file path for the image
        file_path = os.path.join(directory, file_name)
        # Load the image file in grayscale at the requested size
        image = load_img(file_path, color_mode='grayscale', target_size=target_size)
        # Convert the image to a numpy array
        image_array = img_to_array(image)
        # Append the image data to the list
        image_arrays.append(image_array)
    # Stack the list of arrays into a single 4D numpy array (N, H, W, 1)
    X = np.stack(image_arrays, axis=0)
    # Get the integer labels from the dataframe (one-hot encoding is done later)
    y = df['label'].values
    # Return the image data and the labels
    return X, y
# Plot the loss and accuracy curves
def plot_loss_curve(history):
    # Convert history to a DataFrame
    history = pd.DataFrame(history)
    epochs = list(range(1, len(history) + 1))
    # Create two subplots: one for loss and one for accuracy
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
    # Plot training and validation loss on the first subplot
    # (labels only on the line plots so the legend shows each series once)
    ax1.scatter(epochs, history["loss"])
    ax1.plot(epochs, history["loss"], label="Training Loss")
    ax1.scatter(epochs, history["val_loss"])
    ax1.plot(epochs, history["val_loss"], label="Validation Loss")
    ax1.set_title("Loss Curve")
    ax1.set_xlabel("Epochs")
    ax1.set_ylabel("Loss")
    ax1.legend()
    # Plot training and validation accuracy on the second subplot
    ax2.scatter(epochs, history["accuracy"])
    ax2.plot(epochs, history["accuracy"], label="Training Accuracy")
    ax2.scatter(epochs, history["val_accuracy"])
    ax2.plot(epochs, history["val_accuracy"], label="Validation Accuracy")
    ax2.set_title("Accuracy Curve")
    ax2.set_xlabel("Epochs")
    ax2.set_ylabel("Accuracy")
    ax2.legend()
    return fig
# Function to compile results of the model
def compile_results(model_history, model_name, batch_size, results=None):
    best_val_idx = np.argmax(model_history["val_accuracy"])
    result = {
        "Model Name": model_name,
        "Epochs": len(model_history["loss"]),
        "Batch Size": batch_size,
        "Train Loss": model_history["loss"][best_val_idx],
        "Val Loss": model_history["val_loss"][best_val_idx],
        "Train Acc": model_history["accuracy"][best_val_idx],
        "Val Acc": model_history["val_accuracy"][best_val_idx],
        "[Train - Val] Acc": model_history["accuracy"][best_val_idx] - model_history["val_accuracy"][best_val_idx]
    }
    # Wrap the result in a single-row DataFrame
    result_df = pd.DataFrame([result])
    # Append to the existing DataFrame or create a new one
    if results is not None:
        return pd.concat([results, result_df], ignore_index=True)
    else:
        return result_df
# Create an initial DataFrame and set it to None
result_df = None
Let's take a look at the dataset provided. The dataset folder contains 15 classes of different vegetables, segmented into train, test, and validation folders.
label : The true class of the image, represented as an integer ranging from 0 to 14*.
\* Each number represents a different vegetable item in the dataset :
| Class (0 - 14) | Types of Vegetables |
|---|---|
| 0 | Bean |
| 1 | Bitter Gourd |
| 2 | Bottle Gourd |
| 3 | Brinjal |
| 4 | Broccoli |
| 5 | Cabbage |
| 6 | Capsicum |
| 7 | Carrot |
| 8 | Cauliflower |
| 9 | Cucumber |
| 10 | Papaya |
| 11 | Potato |
| 12 | Pumpkin |
| 13 | Radish |
| 14 | Tomato |
Use the load_images utility defined above (built on ImageDataGenerator from keras.preprocessing) to access the images in the respective folders.
# Loading training, validation and testing datasets
# Define folder paths
train_data_dir = 'Datasets/Dataset for CA1 part A/train'
test_data_dir = 'Datasets/Dataset for CA1 part A/test'
val_data_dir = 'Datasets/Dataset for CA1 part A/validation'
print('Overview of Initial Data Import for Training, Testing and Validation:\n')
# Load the images in 224 x 224 full-color images
print("224 x 224 px Colored Image Data :")
train_df = load_images(train_data_dir)
val_df = load_images(val_data_dir)
test_df = load_images(test_data_dir)
# View an overview of a subset of the dataset
print("\nExample of Image Dataframe Generated (224 x 224 px) :")
train_df.head()
Overview of Initial Data Import for Training, Testing and Validation: 224 x 224 px Colored Image Data : Found 9028 images belonging to 15 classes. Found 3000 images belonging to 15 classes. Found 3000 images belonging to 15 classes. Example of Image Dataframe Generated (224 x 224 px) :
| | image_file | label |
|---|---|---|
| 0 | Bean\0026.jpg | 0 |
| 1 | Bean\0028.jpg | 0 |
| 2 | Bean\0029.jpg | 0 |
| 3 | Bean\0030.jpg | 0 |
| 4 | Bean\0034.jpg | 0 |
Note : The images imported will be resized and grayscaled in the later steps during pre-processing to account for the two required image sizes.
To check for missing values, use .isnull().sum() to identify any missing / null data in the dataset.
print(train_df.isnull().sum().sum())
print(test_df.isnull().sum().sum())
print(val_df.isnull().sum().sum())
0 0 0
IMAGE SIZE : 31 X 31 PX
# Plotting 15 random images from train_df in 31 by 31 px
fig, axes = plt.subplots(3, 5, figsize=(10, 6)) # Set up a 3x5 grid of subplots
axes = axes.flatten() # Flatten the grid for easy iteration
# Generate 15 random indices from the length of your dataframe
random_indices = np.random.choice(len(train_df), size=15, replace=False)
# Loop through the randomly selected image file paths and display them
for ax, idx in zip(axes, random_indices):
    file_name = train_df['image_file'].iloc[idx]
    file_path = os.path.join(train_data_dir, file_name)
    # Display the images in 31 by 31 px
    image = load_img(file_path, target_size=(31, 31))
    ax.imshow(image)
plt.tight_layout()
plt.show()
IMAGE SIZE : 128 X 128 PX
# Plotting 15 random images from train_df in 128 by 128 px
fig, axes = plt.subplots(3, 5, figsize=(10, 6)) # Set up a 3x5 grid of subplots
axes = axes.flatten() # Flatten the grid for easy iteration
# Generate 15 random indices from the length of your dataframe
random_indices = np.random.choice(len(train_df), size=15, replace=False)
# Loop through the randomly selected image file paths and display them
for ax, idx in zip(axes, random_indices):
    file_name = train_df['image_file'].iloc[idx]
    file_path = os.path.join(train_data_dir, file_name)
    # Display the images in 128 by 128 px
    image = load_img(file_path, target_size=(128, 128))
    ax.imshow(image)
plt.tight_layout()
plt.show()
X represents the images and y represents the image labels. Here, we will also make use of the load_and_preprocess_images function, which resizes the images to 31 by 31 px or 128 by 128 px and converts them to grayscale (refer to the function definition above for details).
# Splitting dataframe into x and y
# Load and preprocess the images for 31x31 images
X_train_31, y_train_31 = load_and_preprocess_images(train_df, train_data_dir, (31, 31))
X_val_31, y_val_31 = load_and_preprocess_images(val_df, val_data_dir, (31, 31))
X_test_31, y_test_31 = load_and_preprocess_images(test_df, test_data_dir, (31, 31))
# Repeat the process for the 128x128 images
X_train_128, y_train_128 = load_and_preprocess_images(train_df, train_data_dir, (128, 128))
X_val_128, y_val_128 = load_and_preprocess_images(val_df, val_data_dir, (128, 128))
X_test_128, y_test_128 = load_and_preprocess_images(test_df, test_data_dir, (128, 128))
VIEWING THE SHAPE OF THE ARRAY TO CHECK SIZING AND GRAYSCALING
# View the shape of the training dataset
print(f'Shape of X_train_31: {X_train_31.shape}')
print(f'Shape of X_train_128: {X_train_128.shape}')
print(f'Shape of X_val_31: {X_val_31.shape}')
print(f'Shape of X_val_128: {X_val_128.shape}')
print(f'Shape of X_test_31: {X_test_31.shape}')
print(f'Shape of X_test_128: {X_test_128.shape}')
Shape of X_train_31: (9028, 31, 31, 1) Shape of X_train_128: (9028, 128, 128, 1) Shape of X_val_31: (3000, 31, 31, 1) Shape of X_val_128: (3000, 128, 128, 1) Shape of X_test_31: (3000, 31, 31, 1) Shape of X_test_128: (3000, 128, 128, 1)
# Instantiate classes of vegetables
class_labels = {
0: "Bean",
1: "Bitter_Gourd",
2: "Bottle_Gourd",
3: "Brinjal",
4: "Broccoli",
5: "Cabbage",
6: "Capsicum",
7: "Carrot",
8: "Cauliflower",
9: "Cucumber",
10: "Papaya",
11: "Potato",
12: "Pumpkin",
13: "Radish",
14: "Tomato"
}
NUM_CLASS = 15
Based on the pre-defined classes, we can now visualize a sample set of the images and see what they look like.
IMAGE SIZE : 31 X 31 PX
# Visualizing 25 randomly selected images in 31 by 31 px
# Set the images axes and figure size
fig, axes = plt.subplots(5, 5, figsize=(10, 10), tight_layout=True)
# Generate 25 random indices to select images
random_indices = np.random.choice(len(X_train_31), 25, replace=False)
# Loop through the randomly selected indices
for i, ax in enumerate(axes.ravel()):
    if i < len(random_indices):
        # Get the index for the current random image
        index = random_indices[i]
        # Display the image with its class label
        ax.imshow(X_train_31[index], cmap='Greys')
        ax.set_title(f"{class_labels[y_train_31[index]]}")
# Display the images
plt.show()
IMAGE SIZE : 128 X 128 PX
# Visualizing 25 randomly selected images in 128 by 128 px
# Set the images axes and figure size
fig, axes = plt.subplots(5, 5, figsize=(10, 10), tight_layout=True)
# Generate 25 random indices to select images
random_indices = np.random.choice(len(X_train_128), 25, replace=False)
# Loop through the randomly selected indices
for i, ax in enumerate(axes.ravel()):
    if i < len(random_indices):
        # Get the index for the current random image
        index = random_indices[i]
        # Display the image with its class label
        ax.imshow(X_train_128[index], cmap='Greys')
        ax.set_title(f"{class_labels[y_train_128[index]]}")
# Display the images
plt.show()
ANALYZING THE IMAGE DISPLAY FOR 31 BY 31 AND 128 BY 128 PIXELS
From both image sets, we see that the smaller the image, the more pixelated it appears, because fewer pixels are used to represent the same content.
Fine details may be lost or become less distinguishable in the downscaled images, which could make it harder for the model to classify images accurately.
However, to confirm this, we will conduct further tests and evaluate various deep learning models to determine the validity of the above statement.
# Visualizing class distributions
# Count the images for each label
labels, counts = np.unique(y_train_31, return_counts=True)
for label, count in zip(labels, counts):
    print(f"{class_labels[label]}: {count}")
# Display a barchart displaying the counts
plt.barh(labels, counts, tick_label=list(class_labels.values()))
plt.show()
Bean: 780 Bitter_Gourd: 720 Bottle_Gourd: 441 Brinjal: 868 Broccoli: 750 Cabbage: 503 Capsicum: 351 Carrot: 256 Cauliflower: 587 Cucumber: 812 Papaya: 566 Potato: 377 Pumpkin: 814 Radish: 248 Tomato: 955
ANALYSIS OF THE CLASS DISTRIBUTIONS
From the bar graph and counts displayed, it is apparent that there is a notable imbalance between the image classes. For example, Tomato has 955 images, while Radish has only 248 and Carrot only 256.
This means class oversampling will be necessary to ensure more balanced classes, which will be performed later in pre-processing and feature engineering.
# Visualizing the pixel values
print("Pixel Values:")
print("Max: ", np.max(X_train_31))
print("Min: ", np.min(X_train_31))
# Calculating the mean and standard deviation of the pixels
print("\nMean and Standard Deviation of Pixels for 31 by 31 px Images:")
print("Mean: ", np.mean(X_train_31))
print("Std: ", np.std(X_train_31))
print("\nMean and Standard Deviation of Pixels for 128 by 128 px Images:")
print("Mean: ", np.mean(X_train_128))
print("Std: ", np.std(X_train_128))
Pixel Values: Max: 255.0 Min: 0.0 Mean and Standard Deviation of Pixels for 31 by 31 px Images: Mean: 114.84767 Std: 56.12739 Mean and Standard Deviation of Pixels for 128 by 128 px Images: Mean: 114.83986 Std: 56.12101
ANALYSIS OF THE PIXEL DISTRIBUTIONS
Based on the pixel values, as expected, the pixel distribution ranges from 0 (black) to 255 (white) for grayscale images.
The mean pixel value of about 114.8 falls slightly below the middle of the grayscale range (127.5), indicating that the images are not extremely dark but lean slightly towards the darker end of the scale.
The standard deviation of about 56.1 indicates a notable degree of variation in pixel values across the images, suggesting significant contrast, with pixels both lighter and darker than the average.
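As a side note, the statistics above could also be used to standardize the pixel values to zero mean and unit variance. The snippet below is a hedged sketch of that alternative; it is not applied in this notebook, which uses simple division by 255 later instead.
# Optional sketch (not applied in this notebook): standardize pixels using the statistics above
mean_31, std_31 = np.mean(X_train_31), np.std(X_train_31)
X_train_31_standardized = (X_train_31 - mean_31) / std_31
print("Standardized mean:", np.mean(X_train_31_standardized))  # approximately 0
print("Standardized std :", np.std(X_train_31_standardized))   # approximately 1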
# Visualizing the distribution of brightness for 128px images
flattened_images_128 = X_train_128.reshape(X_train_128.shape[0], -1)
masked_images_128 = np.ma.masked_equal(flattened_images_128, 0)
average_brightness_128 = masked_images_128.mean(axis=1)
# Visualizing the distribution of brightness for 31px images
flattened_images_31 = X_train_31.reshape(X_train_31.shape[0], -1)
masked_images_31 = np.ma.masked_equal(flattened_images_31, 0)
average_brightness_31 = masked_images_31.mean(axis=1)
# Create a figure with two subplots side by side
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
# Plot the distribution of average brightness for 128px images
sns.histplot(average_brightness_128.compressed(), ax=ax1, kde=True)
ax1.set_xlabel('Average Brightness per Image (128px)')
ax1.set_title('Distribution of Average Brightness (128px)')
# Plot the distribution of average brightness for 31px images
sns.histplot(average_brightness_31.compressed(), ax=ax2, kde=True)
ax2.set_xlabel('Average Brightness per Image (31px)')
ax2.set_title('Distribution of Average Brightness (31px)')
# Show the plot
plt.tight_layout()
plt.show()
In pre-processing and feature engineering, we will handle class imbalance, one-hot encode the labels, normalize the pixel values, and augment the training data.
We use RandomOverSampler() to randomly duplicate samples from the minority classes to ensure a more even class distribution, with sampling_strategy='all' since there are multiple classes to oversample.
# Handle class imbalance using RandomOverSampler
# Initialize RandomOverSampler
oversampler = RandomOverSampler(sampling_strategy='all', random_state=42)
# Reshape the 4D images to 2D vectors
X_train_31_reshaped = X_train_31.reshape((X_train_31.shape[0], -1))
X_train_128_reshaped = X_train_128.reshape((X_train_128.shape[0], -1))
# Oversampling for 31px and 128px images
X_train_31_resampled, y_train_31_resampled = oversampler.fit_resample(X_train_31_reshaped, y_train_31)
X_train_128_resampled, y_train_128_resampled = oversampler.fit_resample(X_train_128_reshaped, y_train_128)
# Reshape the resampled data back to its original 4D shape
X_train_31_resampled = X_train_31_resampled.reshape((-1, X_train_31.shape[1], X_train_31.shape[2], X_train_31.shape[3]))
X_train_128_resampled = X_train_128_resampled.reshape((-1, X_train_128.shape[1], X_train_128.shape[2], X_train_128.shape[3]))
DISPLAY VISUAL AND TEXTUAL REPRESENTATION OF THE OVERSAMPLING RESULTS
After oversampling, every class is brought up to the size of the dominant class, and this provides a more balanced representation of the various image classes.
# Display a visual representation of the oversampled classes
# Print the shape of the new oversampled classes
print('Shape of Arrays After Oversampling:\n===============================================')
print(f'Shape of 31 x 31 px Array: {X_train_31_resampled.shape}')
print(f'Shape of 128 x 128 px Array: {X_train_128_resampled.shape}')
# Visualizing class distributions before and after oversampling
fig, axes = plt.subplots(2, 2, figsize=(12, 10))
# 31x31 images - Before and After Oversampling
labels_31, counts_31 = np.unique(y_train_31, return_counts=True)
axes[0, 0].barh(labels_31, counts_31, tick_label=list(class_labels.values()))
axes[0, 0].set_title("Class Distribution (Before Oversampling) - 31x31 Images")
labels_31_resampled, counts_31_resampled = np.unique(y_train_31_resampled, return_counts=True)
axes[0, 1].barh(labels_31_resampled, counts_31_resampled, tick_label=list(class_labels.values()))
axes[0, 1].set_title("Class Distribution (After Random Oversampling) - 31x31 Images")
# 128x128 images - Before and After Oversampling
labels_128, counts_128 = np.unique(y_train_128, return_counts=True)
axes[1, 0].barh(labels_128, counts_128, tick_label=list(class_labels.values()))
axes[1, 0].set_title("Class Distribution (Before Oversampling) - 128x128 Images")
labels_128_resampled, counts_128_resampled = np.unique(y_train_128_resampled, return_counts=True)
axes[1, 1].barh(labels_128_resampled, counts_128_resampled, tick_label=list(class_labels.values()))
axes[1, 1].set_title("Class Distribution (After Random Oversampling) - 128x128 Images")
plt.tight_layout()
plt.show()
Shape of Arrays After Oversampling: =============================================== Shape of 31 x 31 px Array: (14325, 31, 31, 1) Shape of 128 x 128 px Array: (14325, 128, 128, 1)
We will first perform image averaging, which involves stacking multiple photos on top of each other and averaging them. We do this to inspect the noise in the images as well as to get an overview of all images in the dataset.
# Calculate the mean of all images in X_train_31 and X_train_128
mean_image_31 = np.mean(X_train_31, axis=0) / 255
mean_image_128 = np.mean(X_train_128, axis=0) / 255
# Create a figure with two subplots side by side
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
# Plot the mean image for 31x31 images
ax1.imshow(mean_image_31, cmap='Greys')
ax1.set_title('Average of All Images (31x31)')
# Plot the mean image for 128x128 images
ax2.imshow(mean_image_128, cmap='Greys')
ax2.set_title('Average of All Images (128x128)')
# Show the plot
plt.tight_layout()
plt.show()
When examining the average images for each class, it becomes apparent that predicting their content can be challenging. The input data provided contains multiple objects in the same image, which makes the process of averaging more complex.
Nevertheless, by referring to the original image data shown above, we can identify some of the objects depicted in the average images. Notably, items such as papaya, cucumber, broccoli, brinjal, and a few others are discernible.
Surprisingly, the average images for each class label appear more discernible at 31 x 31 px, as the coarser resolution smooths away fine detail and emphasizes the overall shape of the objects. However, this increased clarity may come at the cost of reduced detail for larger objects or scenes. The 128 x 128 px images, on the other hand, provide a higher level of detail but may struggle to show distinct features when multiple objects are present in the same image.
# View the average image for every class label
# Redefine the resampled variables
y_train_31_avg = y_train_31_resampled
y_train_128_avg = y_train_128_resampled
# Set figure and axes for 31x31 images
fig, ax = plt.subplots(3, 5, figsize=(20, 10))
# Loop through the labels for 31x31 images
for idx, subplot in enumerate(ax.ravel()):
    avg_image_31 = np.mean(X_train_31_resampled[np.squeeze(y_train_31_avg == idx)], axis=0) / 255
    subplot.imshow(avg_image_31, cmap='Greys')
    subplot.set_title(f"Average {class_labels[idx]} (31x31)")
    subplot.axis('off')
# Set figure and axes for 128x128 images
fig, ax = plt.subplots(3, 5, figsize=(20, 10))
# Loop through the labels for 128x128 images
for idx, subplot in enumerate(ax.ravel()):
    avg_image_128 = np.mean(X_train_128_resampled[np.squeeze(y_train_128_avg == idx)], axis=0) / 255
    subplot.imshow(avg_image_128, cmap='Greys')
    subplot.set_title(f"Average {class_labels[idx]} (128x128)")
    subplot.axis('off')
plt.tight_layout()
plt.show()
# Performing one-hot encoding on labels
# For 31x31px
y_train_31 = to_categorical(y_train_31_resampled)
y_test_31 = to_categorical(y_test_31)
y_val_31 = to_categorical(y_val_31)
# For 128x128px
y_train_128 = to_categorical(y_train_128_resampled)
y_test_128 = to_categorical(y_test_128)
y_val_128 = to_categorical(y_val_128)
# Print the output
print(f'31 x 31 PX : {y_train_31[0]}')
print("Label: ", tf.argmax(y_train_31[0]))
print(f'\n128 x 128 PX : {y_train_128[0]}')
print("Label: ", tf.argmax(y_train_128[0]))
31 x 31 PX : [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] Label: tf.Tensor(0, shape=(), dtype=int64) 128 x 128 PX : [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] Label: tf.Tensor(0, shape=(), dtype=int64)
# Apply normalization to the training, testing and validation for 31 and 128 px
# Normalize 31x31 images
X_train_31 = X_train_31_resampled / 255.0
X_val_31 = X_val_31 / 255.0
X_test_31 = X_test_31 / 255.0
# Normalize 128x128 images
X_train_128 = X_train_128_resampled / 255.0
X_val_128 = X_val_128 / 255.0
X_test_128 = X_test_128 / 255.0
# Plot image to show before and after normalization for training data
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(8, 8))
# 31x31 images
ax[0, 0].imshow(X_train_31_resampled[0], cmap='Greys')
ax[0, 0].set_title("Before Normalization - 31px")
ax[0, 1].imshow(X_train_31[0], cmap='Greys')
ax[0, 1].set_title("After Normalization - 31px")
# 128x128 images
ax[1, 0].imshow(X_train_128_resampled[0], cmap='Greys')
ax[1, 0].set_title("Before Normalization - 128px")
ax[1, 1].imshow(X_train_128[0], cmap='Greys')
ax[1, 1].set_title("After Normalization - 128px")
plt.tight_layout()
plt.show()
To prevent possible overfitting of the model, we will apply data augmentation, a method commonly used to reduce the variance of a model by applying random transformations to the data during training.
Types of Image Data Augmentations : Flipping / Cropping / Rotating / Shearing ....
For our dataset, as seen in the EDA earlier, the images vary in brightness, so we will apply random brightness adjustments in our augmentation to make the model robust to brightness variation.
We will also apply random flips, random crops and random rotations: after oversampling we want more variation in image appearance, so that the duplicated samples are not identical. From our earlier analysis, many images share a similar orientation, so these transformations help diversify orientation. Cropping also helps the model generalize and identify key features more easily.
Note : We will only augment training data as we do not want to edit the validation and test data, as they are being used for evaluation of the model's accuracy.
# Augment the image data based on the EDA earlier
def data_augmentation(X_train, IMG_SIZE):
    imageArr = []
    for images in X_train:
        image = tf.convert_to_tensor(images, dtype=tf.float32)
        # Apply random brightness variation with a chance of 50%
        if np.random.rand() < 0.5:
            image = tf.image.random_brightness(image, max_delta=0.2)
        # Apply random flip and random crop with a chance of 50%
        if np.random.randint(0, 2) == 1:
            image = tf.image.random_flip_left_right(image)
            image = tf.image.resize_with_crop_or_pad(image, IMG_SIZE[0] + 4, IMG_SIZE[1] + 4)
            image = tf.image.random_crop(image, size=IMG_SIZE)
        # Apply a random rotation with a chance of 50%; tf.image.rot90 rotates in
        # multiples of 90 degrees, so we pick a random number of quarter turns
        if np.random.rand() < 0.5:
            image = tf.image.rot90(image, k=np.random.randint(1, 4))
        # Add the (possibly augmented) image to the list
        imageArr.append(image)
    return np.array(imageArr)
IMAGE SIZE : 31 X 31 PX
IMG_SIZE = (31, 31, 1)
X_train_31_aug = np.copy(X_train_31)
X_train_31_aug = data_augmentation(X_train_31_aug, IMG_SIZE)
Let's now take a look at the augmented images.
Based on the results shown below, we can see that the images were successfully augmented.
# Set up the figure and axes
fig, ax = plt.subplots(2, 10, figsize=(20, 4))
for idx in range(10):  # Loop through the first 10 classes
    # Display original images
    original_image_idx = np.where(y_train_31_resampled == idx)[0][0]
    ax[0, idx].set_title(f"O: {class_labels[idx]}")
    ax[0, idx].imshow(X_train_31_resampled[original_image_idx].squeeze(), cmap='gray')
    ax[0, idx].axis("off")
    # Display the corresponding augmented images
    augmented_image_idx = np.where(y_train_31_resampled == idx)[0][0]
    ax[1, idx].set_title(f"A: {class_labels[idx]}")
    ax[1, idx].imshow(X_train_31_aug[augmented_image_idx].squeeze(), cmap='gray')
    ax[1, idx].axis("off")
# Display the plot
plt.tight_layout()
plt.show()
IMAGE SIZE : 128 X 128 PX
IMG_SIZE = (128, 128, 1)
X_train_128_aug = np.copy(X_train_128)
X_train_128_aug = data_augmentation(X_train_128_aug, IMG_SIZE)
Let's now take a look at the augmented images.
Based on the results shown below, we can see that the images were successfully augmented.
# Set up the figure and axes
fig, ax = plt.subplots(2, 10, figsize=(20, 4))
for idx in range(10):  # Loop through the first 10 classes
    # Display original images (using the 128px resampled labels)
    original_image_idx = np.where(y_train_128_resampled == idx)[0][0]
    ax[0, idx].set_title(f"O: {class_labels[idx]}")
    ax[0, idx].imshow(X_train_128_resampled[original_image_idx].squeeze(), cmap='gray')
    ax[0, idx].axis("off")
    # Display the corresponding augmented images
    augmented_image_idx = np.where(y_train_128_resampled == idx)[0][0]
    ax[1, idx].set_title(f"A: {class_labels[idx]}")
    ax[1, idx].imshow(X_train_128_aug[augmented_image_idx].squeeze(), cmap='gray')
    ax[1, idx].axis("off")
# Display the plot
plt.tight_layout()
plt.show()
To solve this image classification task, we will be making use of a few deep learning models.
We will be looking at two optimizers: Adam and SGD.
Adam : The Adam optimizer is a stochastic gradient descent method based on adaptive estimation of first-order and second-order moments of the gradients.
SGD : Stochastic gradient descent is an iterative method for optimizing an objective function with suitable smoothness properties.
Differences between Adam and SGD Optimizers
The Adam optimizer generally converges faster than SGD: it updates the learning rate for each network weight individually, and its coordinate-wise adaptive updates behave like gradient clipping, which helps with heavy-tailed gradient noise. However, SGD is often reported to generalize better than Adam on image classification tasks. Adam's adaptive 'shortcuts' suit NLP and other machine learning purposes, but in image classification every detail matters for distinguishing what an image shows, so we will use the SGD optimizer for all subsequent model training (see the configuration sketch below).
Chosen Optimizer : SGD Optimizer
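The snippet below sketches how each optimizer would be configured in Keras. The learning-rate and momentum values are illustrative defaults rather than values tuned for this dataset; the SGD configuration matches the one used by the models that follow.
# Sketch: configuring the two optimizers discussed above (values are illustrative)
adam_opt = Adam(learning_rate=0.001)             # adapts the step size for each parameter
sgd_opt = SGD(learning_rate=0.01, momentum=0.9)  # one global step size, accelerated by momentum
# The baseline models below compile with the SGD configuration, e.g.:
# model.compile(optimizer=sgd_opt, loss='categorical_crossentropy', metrics=['accuracy'])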
def build_baseline_model_31(X_train, NUM_CLASS, model_name):
    # Clear the previous TensorFlow session
    tf.keras.backend.clear_session()
    # Define the input shape and create the input tensor
    input_shape = (X_train.shape[1], X_train.shape[2], X_train.shape[3])
    inputs = Input(shape=input_shape)
    # Build the model
    x = Flatten()(inputs)
    x = Dense(128, activation='relu')(x)  # Hidden layer 1
    x = Dense(128, activation='relu')(x)  # Hidden layer 2
    x = Dense(128, activation='relu')(x)  # Hidden layer 3
    x = Dense(NUM_CLASS, activation='softmax')(x)  # Output layer
    # Create the model
    model = Model(inputs=inputs, outputs=x, name=model_name)
    # Compile the model with the chosen SGD optimizer
    model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
    # Print the model summary
    model.summary()
    return model
def build_baseline_model_128(X_train, NUM_CLASS, model_name):
    # Clear the previous TensorFlow session
    tf.keras.backend.clear_session()
    # Define the input shape and create the input tensor
    input_shape = (X_train.shape[1], X_train.shape[2], X_train.shape[3])
    inputs = Input(shape=input_shape)
    # Build the model
    x = Flatten()(inputs)
    x = Dense(256, activation='relu')(x)  # Hidden layer 1
    x = Dense(256, activation='relu')(x)  # Hidden layer 2
    x = Dense(256, activation='relu')(x)  # Hidden layer 3
    x = Dense(NUM_CLASS, activation='softmax')(x)  # Output layer
    # Create the model
    model = Model(inputs=inputs, outputs=x, name=model_name)
    # Compile the model with the chosen SGD optimizer
    model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
    # Print the model summary
    model.summary()
    return model
To train the baseline model, we will first use our unaugmented data to train and fit the model. After that, we will retrain the same model with augmented data and compare the differences in performance.
For this model, we will test both image sizes and evaluate which performs better with the same model parameters.
IMAGE SIZE : 31 X 31 PX
baseline = build_baseline_model_31(X_train_31, NUM_CLASS, model_name="Baseline31")
Model: "Baseline31"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
flatten (Flatten) (None, 961) 0
dense (Dense) (None, 128) 123136
dense_1 (Dense) (None, 128) 16512
dense_2 (Dense) (None, 128) 16512
dense_3 (Dense) (None, 15) 1935
=================================================================
Total params: 158,095
Trainable params: 158,095
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE BASELINE MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
baseModelHistory31 = baseline.fit(X_train_31, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 3s 4ms/step - loss: 2.5184 - accuracy: 0.1655 - val_loss: 2.3514 - val_accuracy: 0.2527 Epoch 2/50 448/448 [==============================] - 2s 4ms/step - loss: 2.2231 - accuracy: 0.2838 - val_loss: 2.2833 - val_accuracy: 0.2537 Epoch 3/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0092 - accuracy: 0.3587 - val_loss: 2.0226 - val_accuracy: 0.3693 Epoch 4/50 448/448 [==============================] - 2s 4ms/step - loss: 1.8078 - accuracy: 0.4253 - val_loss: 1.9989 - val_accuracy: 0.3760 Epoch 5/50 448/448 [==============================] - 2s 4ms/step - loss: 1.6080 - accuracy: 0.4877 - val_loss: 1.8646 - val_accuracy: 0.4333 Epoch 6/50 448/448 [==============================] - 2s 4ms/step - loss: 1.4484 - accuracy: 0.5420 - val_loss: 1.6921 - val_accuracy: 0.4863 Epoch 7/50 448/448 [==============================] - 2s 4ms/step - loss: 1.3158 - accuracy: 0.5800 - val_loss: 1.7721 - val_accuracy: 0.4680 Epoch 8/50 448/448 [==============================] - 2s 4ms/step - loss: 1.2262 - accuracy: 0.6093 - val_loss: 1.7829 - val_accuracy: 0.4853 Epoch 9/50 448/448 [==============================] - 2s 4ms/step - loss: 1.1312 - accuracy: 0.6353 - val_loss: 1.6880 - val_accuracy: 0.4990 Epoch 10/50 448/448 [==============================] - 2s 4ms/step - loss: 1.0002 - accuracy: 0.6791 - val_loss: 1.5934 - val_accuracy: 0.5477 Epoch 11/50 448/448 [==============================] - 2s 4ms/step - loss: 0.9329 - accuracy: 0.7016 - val_loss: 1.6199 - val_accuracy: 0.5400 Epoch 12/50 448/448 [==============================] - 2s 4ms/step - loss: 0.8421 - accuracy: 0.7308 - val_loss: 1.7382 - val_accuracy: 0.5240 Epoch 13/50 448/448 [==============================] - 2s 4ms/step - loss: 0.7495 - accuracy: 0.7549 - val_loss: 1.6340 - val_accuracy: 0.5710 Epoch 14/50 448/448 [==============================] - 2s 4ms/step - loss: 0.7146 - accuracy: 0.7696 - val_loss: 1.8361 - val_accuracy: 0.5353 Epoch 15/50 448/448 [==============================] - 2s 4ms/step - loss: 0.6581 - accuracy: 0.7867 - val_loss: 1.8737 - val_accuracy: 0.5297 Epoch 16/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5926 - accuracy: 0.8083 - val_loss: 1.7511 - val_accuracy: 0.5627 Epoch 17/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5751 - accuracy: 0.8132 - val_loss: 1.6412 - val_accuracy: 0.6053 Epoch 18/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5229 - accuracy: 0.8296 - val_loss: 1.7865 - val_accuracy: 0.5840 Epoch 19/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5301 - accuracy: 0.8281 - val_loss: 1.7310 - val_accuracy: 0.6027 Epoch 20/50 448/448 [==============================] - 2s 4ms/step - loss: 0.4702 - accuracy: 0.8485 - val_loss: 2.2630 - val_accuracy: 0.5470 Epoch 21/50 448/448 [==============================] - 2s 4ms/step - loss: 0.4535 - accuracy: 0.8529 - val_loss: 2.2550 - val_accuracy: 0.5520 Epoch 22/50 448/448 [==============================] - 2s 4ms/step - loss: 0.4387 - accuracy: 0.8556 - val_loss: 1.8305 - val_accuracy: 0.6077 Epoch 23/50 448/448 [==============================] - 2s 4ms/step - loss: 0.3711 - accuracy: 0.8780 - val_loss: 2.1347 - val_accuracy: 0.5900 Epoch 24/50 448/448 [==============================] - 2s 4ms/step - loss: 0.4111 - accuracy: 0.8675 - val_loss: 1.9013 - val_accuracy: 0.5987 Epoch 25/50 448/448 [==============================] - 2s 4ms/step - loss: 0.3709 - accuracy: 0.8775 - val_loss: 1.9946 - 
val_accuracy: 0.6157 Epoch 26/50 448/448 [==============================] - 2s 4ms/step - loss: 0.3283 - accuracy: 0.8931 - val_loss: 1.9302 - val_accuracy: 0.6097 Epoch 27/50 448/448 [==============================] - 2s 4ms/step - loss: 0.3263 - accuracy: 0.8940 - val_loss: 2.1808 - val_accuracy: 0.6050 Epoch 28/50 448/448 [==============================] - 2s 4ms/step - loss: 0.3454 - accuracy: 0.8864 - val_loss: 2.0976 - val_accuracy: 0.5973 Epoch 29/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2800 - accuracy: 0.9073 - val_loss: 2.3133 - val_accuracy: 0.5960 Epoch 30/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2698 - accuracy: 0.9094 - val_loss: 2.1424 - val_accuracy: 0.6217 Epoch 31/50 448/448 [==============================] - 2s 4ms/step - loss: 0.3278 - accuracy: 0.8926 - val_loss: 2.3314 - val_accuracy: 0.5947 Epoch 32/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2897 - accuracy: 0.9032 - val_loss: 2.6948 - val_accuracy: 0.6077 Epoch 33/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2991 - accuracy: 0.9042 - val_loss: 2.3448 - val_accuracy: 0.5957 Epoch 34/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2983 - accuracy: 0.9020 - val_loss: 2.2973 - val_accuracy: 0.6153 Epoch 35/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2553 - accuracy: 0.9188 - val_loss: 2.3035 - val_accuracy: 0.6330 Epoch 36/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2417 - accuracy: 0.9203 - val_loss: 2.5879 - val_accuracy: 0.5947 Epoch 37/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2165 - accuracy: 0.9287 - val_loss: 2.6937 - val_accuracy: 0.6233 Epoch 38/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2287 - accuracy: 0.9247 - val_loss: 2.7289 - val_accuracy: 0.6070 Epoch 39/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2109 - accuracy: 0.9297 - val_loss: 2.6491 - val_accuracy: 0.5793 Epoch 40/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2356 - accuracy: 0.9233 - val_loss: 2.8123 - val_accuracy: 0.6057 Epoch 41/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2021 - accuracy: 0.9314 - val_loss: 2.7838 - val_accuracy: 0.6230 Epoch 42/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2509 - accuracy: 0.9208 - val_loss: 2.6179 - val_accuracy: 0.5997 Epoch 43/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2614 - accuracy: 0.9158 - val_loss: 2.6506 - val_accuracy: 0.6097 Epoch 44/50 448/448 [==============================] - 2s 4ms/step - loss: 0.2301 - accuracy: 0.9250 - val_loss: 2.6625 - val_accuracy: 0.6413 Epoch 45/50 448/448 [==============================] - 2s 4ms/step - loss: 0.3232 - accuracy: 0.9010 - val_loss: 2.8400 - val_accuracy: 0.6077 Epoch 46/50 448/448 [==============================] - 2s 4ms/step - loss: 0.1844 - accuracy: 0.9376 - val_loss: 2.9300 - val_accuracy: 0.6193 Epoch 47/50 448/448 [==============================] - 2s 4ms/step - loss: 0.1636 - accuracy: 0.9473 - val_loss: 2.7971 - val_accuracy: 0.6300 Epoch 48/50 448/448 [==============================] - 2s 4ms/step - loss: 0.1392 - accuracy: 0.9546 - val_loss: 3.0945 - val_accuracy: 0.6230 Epoch 49/50 448/448 [==============================] - 2s 4ms/step - loss: 0.1254 - accuracy: 0.9594 - val_loss: 2.7134 - val_accuracy: 0.6530 Epoch 50/50 448/448 [==============================] - 2s 4ms/step - loss: 0.1884 - accuracy: 0.9387 
- val_loss: 3.4375 - val_accuracy: 0.5837
EXTRACT THE TRAINING HISTORY OF THE BASE MODEL INTO A DICTIONARY
Use .history to extract the training history from the base model into a dictionary, then compile_results() (which uses pd.concat()) to compile and display the results.
baseModelHistory31 = baseModelHistory31.history
# Get model results
result_df = compile_results(baseModelHistory31, "BaselineModel31", 32, result_df)
display(result_df.iloc[0])
Model Name BaselineModel31 Epochs 50 Batch Size 32 Train Loss 0.125429 Val Loss 2.713443 Train Acc 0.959372 Val Acc 0.653 [Train - Val] Acc 0.306372 Name: 0, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
From the curves, we can see that while the training loss decreases significantly and the training accuracy increases significantly, the same pattern is not seen in the validation data.
This implies that our model overfits the input data, which is why it performs so poorly on validation data. A fully connected baseline model is therefore not sufficient to classify these images well, and more advanced techniques are needed (a minimal CNN sketch follows the plot below).
plot_loss_curve(baseModelHistory31)
plt.show()
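As a hedged sketch of the 'more advanced techniques' mentioned above, the function below builds a small CNN in which convolution and pooling layers extract local features before the dense classifier. It is an illustrative assumption using the layers already imported, not the tuned model developed later; build_simple_cnn and its settings are hypothetical.
# Minimal CNN sketch (illustrative assumption, not the final tuned model)
def build_simple_cnn(input_shape, num_class):
    inputs = Input(shape=input_shape)
    x = Conv2D(32, (3, 3), activation='relu')(inputs)  # learn local 3x3 features
    x = MaxPooling2D((2, 2))(x)                        # downsample the feature maps
    x = Conv2D(64, (3, 3), activation='relu')(x)
    x = MaxPooling2D((2, 2))(x)
    x = Flatten()(x)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.3)(x)                                # regularization against overfitting
    outputs = Dense(num_class, activation='softmax')(x)
    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
    return model
# Hypothetical usage: simple_cnn = build_simple_cnn((31, 31, 1), NUM_CLASS)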
IMAGE SIZE : 128 X 128 PX
baseline = build_baseline_model_128(X_train_128, NUM_CLASS, model_name="Baseline128")
Model: "Baseline128"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
flatten (Flatten) (None, 16384) 0
dense (Dense) (None, 256) 4194560
dense_1 (Dense) (None, 256) 65792
dense_2 (Dense) (None, 256) 65792
dense_3 (Dense) (None, 15) 3855
=================================================================
Total params: 4,329,999
Trainable params: 4,329,999
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE BASELINE MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
baseModelHistory128 = baseline.fit(X_train_128, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 3s 5ms/step - loss: 2.6118 - accuracy: 0.1166 - val_loss: 2.5719 - val_accuracy: 0.1397 Epoch 2/50 448/448 [==============================] - 2s 4ms/step - loss: 2.5035 - accuracy: 0.1534 - val_loss: 2.6242 - val_accuracy: 0.1267 Epoch 3/50 448/448 [==============================] - 2s 4ms/step - loss: 2.4769 - accuracy: 0.1610 - val_loss: 2.4780 - val_accuracy: 0.1453 Epoch 4/50 448/448 [==============================] - 2s 4ms/step - loss: 2.4393 - accuracy: 0.1752 - val_loss: 2.4633 - val_accuracy: 0.1693 Epoch 5/50 448/448 [==============================] - 2s 4ms/step - loss: 2.4179 - accuracy: 0.1841 - val_loss: 2.4658 - val_accuracy: 0.1640 Epoch 6/50 448/448 [==============================] - 2s 4ms/step - loss: 2.4082 - accuracy: 0.1925 - val_loss: 2.4357 - val_accuracy: 0.1850 Epoch 7/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3972 - accuracy: 0.1938 - val_loss: 2.4577 - val_accuracy: 0.1890 Epoch 8/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3767 - accuracy: 0.2045 - val_loss: 2.5167 - val_accuracy: 0.1633 Epoch 9/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3624 - accuracy: 0.2089 - val_loss: 2.4558 - val_accuracy: 0.2000 Epoch 10/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3427 - accuracy: 0.2181 - val_loss: 2.4477 - val_accuracy: 0.1957 Epoch 11/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3536 - accuracy: 0.2121 - val_loss: 2.5625 - val_accuracy: 0.1517 Epoch 12/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3321 - accuracy: 0.2214 - val_loss: 2.5941 - val_accuracy: 0.1493 Epoch 13/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3015 - accuracy: 0.2387 - val_loss: 2.4210 - val_accuracy: 0.2143 Epoch 14/50 448/448 [==============================] - 2s 4ms/step - loss: 2.2684 - accuracy: 0.2487 - val_loss: 2.4731 - val_accuracy: 0.2143 Epoch 15/50 448/448 [==============================] - 2s 4ms/step - loss: 2.2575 - accuracy: 0.2556 - val_loss: 2.4539 - val_accuracy: 0.1987 Epoch 16/50 448/448 [==============================] - 2s 4ms/step - loss: 2.2447 - accuracy: 0.2616 - val_loss: 2.4180 - val_accuracy: 0.2197 Epoch 17/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1859 - accuracy: 0.2773 - val_loss: 2.4383 - val_accuracy: 0.2243 Epoch 18/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1879 - accuracy: 0.2780 - val_loss: 2.3641 - val_accuracy: 0.2480 Epoch 19/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1908 - accuracy: 0.2860 - val_loss: 2.4959 - val_accuracy: 0.2167 Epoch 20/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1893 - accuracy: 0.2813 - val_loss: 2.3846 - val_accuracy: 0.2560 Epoch 21/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1583 - accuracy: 0.2871 - val_loss: 2.4380 - val_accuracy: 0.2433 Epoch 22/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1494 - accuracy: 0.2946 - val_loss: 2.4278 - val_accuracy: 0.2433 Epoch 23/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1532 - accuracy: 0.2960 - val_loss: 2.4535 - val_accuracy: 0.2347 Epoch 24/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1273 - accuracy: 0.2986 - val_loss: 2.4244 - val_accuracy: 0.2553 Epoch 25/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1339 - accuracy: 0.3031 - val_loss: 2.4260 - 
val_accuracy: 0.2447 Epoch 26/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1009 - accuracy: 0.3137 - val_loss: 2.4536 - val_accuracy: 0.2413 Epoch 27/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0692 - accuracy: 0.3221 - val_loss: 2.3976 - val_accuracy: 0.2570 Epoch 28/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0924 - accuracy: 0.3181 - val_loss: 2.4258 - val_accuracy: 0.2400 Epoch 29/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0974 - accuracy: 0.3088 - val_loss: 2.4006 - val_accuracy: 0.2403 Epoch 30/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0875 - accuracy: 0.3176 - val_loss: 2.5525 - val_accuracy: 0.2190 Epoch 31/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0176 - accuracy: 0.3391 - val_loss: 2.4408 - val_accuracy: 0.2567 Epoch 32/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0397 - accuracy: 0.3308 - val_loss: 2.4731 - val_accuracy: 0.2373 Epoch 33/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0282 - accuracy: 0.3368 - val_loss: 2.3609 - val_accuracy: 0.2617 Epoch 34/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0328 - accuracy: 0.3368 - val_loss: 2.4579 - val_accuracy: 0.2457 Epoch 35/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0234 - accuracy: 0.3338 - val_loss: 2.4946 - val_accuracy: 0.2423 Epoch 36/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9779 - accuracy: 0.3529 - val_loss: 2.4201 - val_accuracy: 0.2707 Epoch 37/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0264 - accuracy: 0.3397 - val_loss: 2.4735 - val_accuracy: 0.2490 Epoch 38/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0457 - accuracy: 0.3323 - val_loss: 2.4227 - val_accuracy: 0.2543 Epoch 39/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0287 - accuracy: 0.3424 - val_loss: 2.4930 - val_accuracy: 0.2307 Epoch 40/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9641 - accuracy: 0.3548 - val_loss: 2.4281 - val_accuracy: 0.2597 Epoch 41/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9655 - accuracy: 0.3558 - val_loss: 2.5000 - val_accuracy: 0.2523 Epoch 42/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9422 - accuracy: 0.3638 - val_loss: 2.5377 - val_accuracy: 0.2433 Epoch 43/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9896 - accuracy: 0.3503 - val_loss: 2.4491 - val_accuracy: 0.2683 Epoch 44/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9459 - accuracy: 0.3626 - val_loss: 2.4475 - val_accuracy: 0.2633 Epoch 45/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9369 - accuracy: 0.3713 - val_loss: 2.4788 - val_accuracy: 0.2500 Epoch 46/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9345 - accuracy: 0.3652 - val_loss: 2.4601 - val_accuracy: 0.2587
EXTRACT THE TRAINING HISTORY OF THE BASE MODEL INTO A DICTIONARY
Use .history to extract the training history from the base model into a dictionary, then compile_results() (which uses pd.concat()) to compile and display the results.
baseModelHistory128 = baseModelHistory128.history
# Get model results
result_df = compile_results(baseModelHistory128, "BaselineModel128", 32, result_df)
display(result_df.iloc[1])
Model Name BaselineModel128 Epochs 46 Batch Size 32 Train Loss 1.97794 Val Loss 2.420122 Train Acc 0.352949 Val Acc 0.270667 [Train - Val] Acc 0.082283 Name: 1, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
From the curves, we can see that the 128 px images overfit less than the 31 px images, as the gap between training and validation accuracy is noticeably smaller (about 0.08 versus 0.31).
However, the model still overfits the input data, with unstable validation loss and validation accuracy, indicating that further improvements are definitely needed.
plot_loss_curve(baseModelHistory128)
plt.show()
We will now use our augmented data to train and fit the model, comparing the change in performance to evaluate whether augmenting the image data affects the model's performance and its ability to generalize to unseen data.
For the augmented data, we will again test both image sizes and evaluate which performs better using the same model parameters as for the unaugmented data.
IMAGE SIZE : 31 X 31 PX
baseline = build_baseline_model_31(X_train_31_aug, NUM_CLASS, model_name="Baseline31Augmented")
Model: "Baseline31Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
flatten (Flatten) (None, 961) 0
dense (Dense) (None, 128) 123136
dense_1 (Dense) (None, 128) 16512
dense_2 (Dense) (None, 128) 16512
dense_3 (Dense) (None, 15) 1935
=================================================================
Total params: 158,095
Trainable params: 158,095
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE BASELINE MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
baseModelAugmentedHistory31 = baseline.fit(X_train_31_aug, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 2s 4ms/step - loss: 2.5870 - accuracy: 0.1361 - val_loss: 2.4654 - val_accuracy: 0.2017 Epoch 2/50 448/448 [==============================] - 2s 4ms/step - loss: 2.4310 - accuracy: 0.1997 - val_loss: 2.3457 - val_accuracy: 0.2357 Epoch 3/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3350 - accuracy: 0.2325 - val_loss: 2.1870 - val_accuracy: 0.2860 Epoch 4/50 448/448 [==============================] - 2s 4ms/step - loss: 2.2306 - accuracy: 0.2751 - val_loss: 2.1534 - val_accuracy: 0.3113 Epoch 5/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1351 - accuracy: 0.3101 - val_loss: 2.0933 - val_accuracy: 0.3220 Epoch 6/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0591 - accuracy: 0.3292 - val_loss: 2.0673 - val_accuracy: 0.3400 Epoch 7/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9609 - accuracy: 0.3629 - val_loss: 1.9944 - val_accuracy: 0.3723 Epoch 8/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9049 - accuracy: 0.3828 - val_loss: 1.8406 - val_accuracy: 0.4320 Epoch 9/50 448/448 [==============================] - 2s 4ms/step - loss: 1.8128 - accuracy: 0.4071 - val_loss: 1.8121 - val_accuracy: 0.4350 Epoch 10/50 448/448 [==============================] - 2s 4ms/step - loss: 1.7264 - accuracy: 0.4364 - val_loss: 1.8731 - val_accuracy: 0.4213 Epoch 11/50 448/448 [==============================] - 2s 4ms/step - loss: 1.6818 - accuracy: 0.4508 - val_loss: 1.8735 - val_accuracy: 0.4290 Epoch 12/50 448/448 [==============================] - 2s 4ms/step - loss: 1.6079 - accuracy: 0.4785 - val_loss: 1.7812 - val_accuracy: 0.4617 Epoch 13/50 448/448 [==============================] - 2s 4ms/step - loss: 1.5379 - accuracy: 0.5003 - val_loss: 1.7425 - val_accuracy: 0.4597 Epoch 14/50 448/448 [==============================] - 2s 4ms/step - loss: 1.4735 - accuracy: 0.5170 - val_loss: 1.8322 - val_accuracy: 0.4533 Epoch 15/50 448/448 [==============================] - 2s 4ms/step - loss: 1.4325 - accuracy: 0.5318 - val_loss: 1.8564 - val_accuracy: 0.4680 Epoch 16/50 448/448 [==============================] - 2s 4ms/step - loss: 1.3680 - accuracy: 0.5523 - val_loss: 1.7472 - val_accuracy: 0.4763 Epoch 17/50 448/448 [==============================] - 2s 4ms/step - loss: 1.3053 - accuracy: 0.5714 - val_loss: 1.8108 - val_accuracy: 0.4737 Epoch 18/50 448/448 [==============================] - 2s 4ms/step - loss: 1.2480 - accuracy: 0.5972 - val_loss: 1.7609 - val_accuracy: 0.4913 Epoch 19/50 448/448 [==============================] - 2s 4ms/step - loss: 1.2416 - accuracy: 0.5980 - val_loss: 1.9371 - val_accuracy: 0.4750 Epoch 20/50 448/448 [==============================] - 2s 4ms/step - loss: 1.1863 - accuracy: 0.6074 - val_loss: 1.9436 - val_accuracy: 0.4573 Epoch 21/50 448/448 [==============================] - 2s 4ms/step - loss: 1.1267 - accuracy: 0.6331 - val_loss: 1.8527 - val_accuracy: 0.5000 Epoch 22/50 448/448 [==============================] - 2s 4ms/step - loss: 1.0667 - accuracy: 0.6548 - val_loss: 1.8368 - val_accuracy: 0.5043 Epoch 23/50 448/448 [==============================] - 2s 4ms/step - loss: 1.0678 - accuracy: 0.6501 - val_loss: 1.9162 - val_accuracy: 0.4983 Epoch 24/50 448/448 [==============================] - 2s 4ms/step - loss: 1.0452 - accuracy: 0.6607 - val_loss: 2.1570 - val_accuracy: 0.4830 Epoch 25/50 448/448 [==============================] - 2s 4ms/step - loss: 0.9902 - accuracy: 0.6759 - val_loss: 1.9156 - 
val_accuracy: 0.5227 Epoch 26/50 448/448 [==============================] - 2s 4ms/step - loss: 0.9193 - accuracy: 0.7000 - val_loss: 1.9075 - val_accuracy: 0.5020 Epoch 27/50 448/448 [==============================] - 2s 4ms/step - loss: 0.9154 - accuracy: 0.6971 - val_loss: 2.1025 - val_accuracy: 0.4537 Epoch 28/50 448/448 [==============================] - 2s 4ms/step - loss: 0.8791 - accuracy: 0.7089 - val_loss: 2.1849 - val_accuracy: 0.5100 Epoch 29/50 448/448 [==============================] - 2s 4ms/step - loss: 0.8682 - accuracy: 0.7112 - val_loss: 2.1186 - val_accuracy: 0.4933 Epoch 30/50 448/448 [==============================] - 2s 4ms/step - loss: 0.8420 - accuracy: 0.7220 - val_loss: 2.1609 - val_accuracy: 0.4927 Epoch 31/50 448/448 [==============================] - 2s 4ms/step - loss: 0.8163 - accuracy: 0.7267 - val_loss: 2.0984 - val_accuracy: 0.5183 Epoch 32/50 448/448 [==============================] - 2s 4ms/step - loss: 0.7826 - accuracy: 0.7371 - val_loss: 2.1134 - val_accuracy: 0.5253 Epoch 33/50 448/448 [==============================] - 2s 4ms/step - loss: 0.7369 - accuracy: 0.7536 - val_loss: 2.2065 - val_accuracy: 0.5053 Epoch 34/50 448/448 [==============================] - 2s 4ms/step - loss: 0.7432 - accuracy: 0.7507 - val_loss: 2.0888 - val_accuracy: 0.5177 Epoch 35/50 448/448 [==============================] - 2s 4ms/step - loss: 0.7113 - accuracy: 0.7617 - val_loss: 2.2200 - val_accuracy: 0.5350 Epoch 36/50 448/448 [==============================] - 2s 4ms/step - loss: 0.6627 - accuracy: 0.7798 - val_loss: 2.2455 - val_accuracy: 0.5137 Epoch 37/50 448/448 [==============================] - 2s 4ms/step - loss: 0.6594 - accuracy: 0.7848 - val_loss: 2.3177 - val_accuracy: 0.5310 Epoch 38/50 448/448 [==============================] - 2s 4ms/step - loss: 0.6608 - accuracy: 0.7786 - val_loss: 2.1313 - val_accuracy: 0.5387 Epoch 39/50 448/448 [==============================] - 2s 4ms/step - loss: 0.6153 - accuracy: 0.7921 - val_loss: 2.3702 - val_accuracy: 0.5183 Epoch 40/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5942 - accuracy: 0.8038 - val_loss: 2.3131 - val_accuracy: 0.5263 Epoch 41/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5942 - accuracy: 0.8020 - val_loss: 2.6336 - val_accuracy: 0.5260 Epoch 42/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5815 - accuracy: 0.8103 - val_loss: 2.2884 - val_accuracy: 0.5010 Epoch 43/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5551 - accuracy: 0.8174 - val_loss: 2.6140 - val_accuracy: 0.5170 Epoch 44/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5487 - accuracy: 0.8167 - val_loss: 2.9216 - val_accuracy: 0.5117 Epoch 45/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5483 - accuracy: 0.8167 - val_loss: 2.9111 - val_accuracy: 0.5370 Epoch 46/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5496 - accuracy: 0.8200 - val_loss: 2.7802 - val_accuracy: 0.5350 Epoch 47/50 448/448 [==============================] - 2s 3ms/step - loss: 0.5702 - accuracy: 0.8145 - val_loss: 2.9488 - val_accuracy: 0.5013 Epoch 48/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5402 - accuracy: 0.8214 - val_loss: 2.6543 - val_accuracy: 0.5440 Epoch 49/50 448/448 [==============================] - 2s 4ms/step - loss: 0.5378 - accuracy: 0.8250 - val_loss: 2.7626 - val_accuracy: 0.5170 Epoch 50/50 448/448 [==============================] - 2s 4ms/step - loss: 0.4990 - accuracy: 0.8364 
- val_loss: 2.6535 - val_accuracy: 0.5310
EXTRACT THE TRAINING HISTORY OF THE BASE MODEL INTO A DICTIONARY
Use .history to extract the training data from the base model and pd.concat() to display the results.
baseModelAugmentedHistory31 = baseModelAugmentedHistory31.history
# Get model results
result_df = compile_results(baseModelAugmentedHistory31, "BaselineModel31Augmented", 32, result_df)
display(result_df.iloc[2])
Model Name           BaselineModel31Augmented
Epochs               50
Batch Size           32
Train Loss           0.540158
Val Loss             2.654329
Train Acc            0.821431
Val Acc              0.544
[Train - Val] Acc    0.277431
Name: 2, dtype: object
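For readability, here is a minimal sketch of the kind of compile_results() helper assumed throughout. The actual implementation is defined with the other utilities at the top of this notebook and may differ; since early stopping uses restore_best_weights=True, the reported metrics are taken from the best-val_accuracy epoch.
# Hypothetical sketch of the compile_results() utility (names and logic assumed)
def compile_results(history, model_name, batch_size, result_df):
    best = int(np.argmax(history['val_accuracy']))  # epoch restored by early stopping
    row = pd.DataFrame([{
        'Model Name': model_name,
        'Epochs': len(history['loss']),
        'Batch Size': batch_size,
        'Train Loss': history['loss'][best],
        'Val Loss': history['val_loss'][best],
        'Train Acc': history['accuracy'][best],
        'Val Acc': history['val_accuracy'][best],
        '[Train - Val] Acc': history['accuracy'][best] - history['val_accuracy'][best],
    }])
    # Append the summary row to the running results table
    return pd.concat([result_df, row], ignore_index=True)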
PLOTTING THE LOSS AND ACCURACY CURVE
plot_loss_curve(baseModelAugmentedHistory31)
plt.show()
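plot_loss_curve() is likewise one of the utilities defined earlier in the notebook; a minimal sketch of its assumed shape:
# Hypothetical sketch of the plot_loss_curve() utility
def plot_loss_curve(history):
    # Plot training vs validation loss and accuracy side by side
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    ax1.plot(history['loss'], label='train')
    ax1.plot(history['val_loss'], label='validation')
    ax1.set_title('Loss'); ax1.set_xlabel('Epoch'); ax1.legend()
    ax2.plot(history['accuracy'], label='train')
    ax2.plot(history['val_accuracy'], label='validation')
    ax2.set_title('Accuracy'); ax2.set_xlabel('Epoch'); ax2.legend()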
IMAGE SIZE : 128 X 128 PX
baseline = build_baseline_model_128(X_train_128_aug, NUM_CLASS, model_name="Baseline128Augmented")
Model: "Baseline128Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
flatten (Flatten) (None, 16384) 0
dense (Dense) (None, 256) 4194560
dense_1 (Dense) (None, 256) 65792
dense_2 (Dense) (None, 256) 65792
dense_3 (Dense) (None, 15) 3855
=================================================================
Total params: 4,329,999
Trainable params: 4,329,999
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE BASELINE MODEL
Fit the data using model.fit(). Apply early stopping as well, monitoring val_accuracy with a patience of 10 epochs and restoring the best weights, to help prevent overfitting.
baseModelAugmentedHistory128 = baseline.fit(X_train_128_aug, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 3s 5ms/step - loss: 2.5936 - accuracy: 0.1313 - val_loss: 2.5058 - val_accuracy: 0.1573 Epoch 2/50 448/448 [==============================] - 2s 5ms/step - loss: 2.4707 - accuracy: 0.1754 - val_loss: 2.4489 - val_accuracy: 0.1857 Epoch 3/50 448/448 [==============================] - 2s 5ms/step - loss: 2.4185 - accuracy: 0.1911 - val_loss: 2.3931 - val_accuracy: 0.2007 Epoch 4/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3796 - accuracy: 0.2081 - val_loss: 2.4200 - val_accuracy: 0.1933 Epoch 5/50 448/448 [==============================] - 2s 5ms/step - loss: 2.3449 - accuracy: 0.2203 - val_loss: 2.3280 - val_accuracy: 0.2277 Epoch 6/50 448/448 [==============================] - 2s 4ms/step - loss: 2.3208 - accuracy: 0.2267 - val_loss: 2.4222 - val_accuracy: 0.2030 Epoch 7/50 448/448 [==============================] - 2s 4ms/step - loss: 2.2722 - accuracy: 0.2465 - val_loss: 2.3249 - val_accuracy: 0.2337 Epoch 8/50 448/448 [==============================] - 2s 4ms/step - loss: 2.2204 - accuracy: 0.2666 - val_loss: 2.2720 - val_accuracy: 0.2703 Epoch 9/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1766 - accuracy: 0.2801 - val_loss: 2.2725 - val_accuracy: 0.2730 Epoch 10/50 448/448 [==============================] - 2s 4ms/step - loss: 2.1389 - accuracy: 0.2940 - val_loss: 2.2472 - val_accuracy: 0.2667 Epoch 11/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0968 - accuracy: 0.3097 - val_loss: 2.4643 - val_accuracy: 0.2240 Epoch 12/50 448/448 [==============================] - 2s 4ms/step - loss: 2.0579 - accuracy: 0.3186 - val_loss: 2.1516 - val_accuracy: 0.2987 Epoch 13/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9917 - accuracy: 0.3391 - val_loss: 2.1244 - val_accuracy: 0.3227 Epoch 14/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9691 - accuracy: 0.3481 - val_loss: 2.1972 - val_accuracy: 0.2893 Epoch 15/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9570 - accuracy: 0.3564 - val_loss: 2.2839 - val_accuracy: 0.2907 Epoch 16/50 448/448 [==============================] - 2s 4ms/step - loss: 1.9332 - accuracy: 0.3682 - val_loss: 2.1645 - val_accuracy: 0.3157 Epoch 17/50 448/448 [==============================] - 2s 4ms/step - loss: 1.8639 - accuracy: 0.3903 - val_loss: 2.1832 - val_accuracy: 0.3297 Epoch 18/50 448/448 [==============================] - 2s 5ms/step - loss: 1.8520 - accuracy: 0.3937 - val_loss: 2.1116 - val_accuracy: 0.3457 Epoch 19/50 448/448 [==============================] - 2s 4ms/step - loss: 1.8435 - accuracy: 0.3971 - val_loss: 2.1942 - val_accuracy: 0.3200 Epoch 20/50 448/448 [==============================] - 2s 5ms/step - loss: 1.7854 - accuracy: 0.4202 - val_loss: 2.1232 - val_accuracy: 0.3613 Epoch 21/50 448/448 [==============================] - 2s 5ms/step - loss: 1.7837 - accuracy: 0.4121 - val_loss: 2.1869 - val_accuracy: 0.3163 Epoch 22/50 448/448 [==============================] - 2s 5ms/step - loss: 1.7517 - accuracy: 0.4301 - val_loss: 2.2765 - val_accuracy: 0.3147 Epoch 23/50 448/448 [==============================] - 2s 5ms/step - loss: 1.7158 - accuracy: 0.4436 - val_loss: 2.0795 - val_accuracy: 0.3823 Epoch 24/50 448/448 [==============================] - 2s 5ms/step - loss: 1.6604 - accuracy: 0.4538 - val_loss: 2.2947 - val_accuracy: 0.3417 Epoch 25/50 448/448 [==============================] - 2s 5ms/step - loss: 1.6448 - accuracy: 0.4577 - val_loss: 2.1969 - 
val_accuracy: 0.3503 Epoch 26/50 448/448 [==============================] - 2s 5ms/step - loss: 1.6205 - accuracy: 0.4695 - val_loss: 2.3035 - val_accuracy: 0.3450 Epoch 27/50 448/448 [==============================] - 2s 5ms/step - loss: 1.5993 - accuracy: 0.4732 - val_loss: 2.0573 - val_accuracy: 0.3773 Epoch 28/50 448/448 [==============================] - 2s 5ms/step - loss: 1.5848 - accuracy: 0.4806 - val_loss: 2.1406 - val_accuracy: 0.3880 Epoch 29/50 448/448 [==============================] - 2s 5ms/step - loss: 1.5350 - accuracy: 0.4949 - val_loss: 2.1790 - val_accuracy: 0.3953 Epoch 30/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4787 - accuracy: 0.5136 - val_loss: 2.1173 - val_accuracy: 0.3853 Epoch 31/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4711 - accuracy: 0.5190 - val_loss: 2.3489 - val_accuracy: 0.3767 Epoch 32/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4901 - accuracy: 0.5179 - val_loss: 2.2273 - val_accuracy: 0.3720 Epoch 33/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4266 - accuracy: 0.5357 - val_loss: 2.1466 - val_accuracy: 0.4087 Epoch 34/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4364 - accuracy: 0.5325 - val_loss: 2.2270 - val_accuracy: 0.3817 Epoch 35/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4057 - accuracy: 0.5453 - val_loss: 2.5665 - val_accuracy: 0.3530 Epoch 36/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4256 - accuracy: 0.5384 - val_loss: 2.1790 - val_accuracy: 0.4213 Epoch 37/50 448/448 [==============================] - 2s 5ms/step - loss: 1.3463 - accuracy: 0.5651 - val_loss: 2.2845 - val_accuracy: 0.4000 Epoch 38/50 448/448 [==============================] - 2s 5ms/step - loss: 1.3704 - accuracy: 0.5562 - val_loss: 2.1563 - val_accuracy: 0.4313 Epoch 39/50 448/448 [==============================] - 2s 5ms/step - loss: 1.3485 - accuracy: 0.5608 - val_loss: 2.2128 - val_accuracy: 0.4257 Epoch 40/50 448/448 [==============================] - 2s 5ms/step - loss: 1.2841 - accuracy: 0.5825 - val_loss: 2.1177 - val_accuracy: 0.4383 Epoch 41/50 448/448 [==============================] - 2s 5ms/step - loss: 1.2841 - accuracy: 0.5815 - val_loss: 2.2555 - val_accuracy: 0.4073 Epoch 42/50 448/448 [==============================] - 2s 5ms/step - loss: 1.2232 - accuracy: 0.6029 - val_loss: 2.3711 - val_accuracy: 0.4070 Epoch 43/50 448/448 [==============================] - 2s 5ms/step - loss: 1.2331 - accuracy: 0.6029 - val_loss: 2.1841 - val_accuracy: 0.4387 Epoch 44/50 448/448 [==============================] - 2s 5ms/step - loss: 1.1990 - accuracy: 0.6149 - val_loss: 2.4859 - val_accuracy: 0.4117 Epoch 45/50 448/448 [==============================] - 2s 4ms/step - loss: 1.2088 - accuracy: 0.6108 - val_loss: 2.3769 - val_accuracy: 0.4150 Epoch 46/50 448/448 [==============================] - 2s 4ms/step - loss: 1.1899 - accuracy: 0.6158 - val_loss: 2.3144 - val_accuracy: 0.4230 Epoch 47/50 448/448 [==============================] - 2s 4ms/step - loss: 1.2064 - accuracy: 0.6110 - val_loss: 2.2087 - val_accuracy: 0.4210 Epoch 48/50 448/448 [==============================] - 2s 5ms/step - loss: 1.1933 - accuracy: 0.6182 - val_loss: 2.2959 - val_accuracy: 0.4310 Epoch 49/50 448/448 [==============================] - 2s 4ms/step - loss: 1.2323 - accuracy: 0.6022 - val_loss: 2.3658 - val_accuracy: 0.4287 Epoch 50/50 448/448 [==============================] - 2s 5ms/step - loss: 1.1078 - accuracy: 0.6404 
- val_loss: 2.3628 - val_accuracy: 0.4390
EXTRACT THE TRAINING HISTORY OF THE BASE MODEL INTO A DICTIONARY
Use .history to extract the training data from the base model and pd.concat() to display the results.
baseModelAugmentedHistory128 = baseModelAugmentedHistory128.history
# Get model results
result_df = compile_results(baseModelAugmentedHistory128, "BaselineModel128Augmented", 32, result_df)
display(result_df.iloc[3])
Model Name           BaselineModel128Augmented
Epochs               50
Batch Size           32
Train Loss           1.107781
Val Loss             2.362754
Train Acc            0.640419
Val Acc              0.439
[Train - Val] Acc    0.201419
Name: 3, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
plot_loss_curve(baseModelAugmentedHistory128)
plt.show()
From the baseline model results for both image sizes, it is clear that the model overfits the training data: it performs markedly worse on the validation set, with lower validation accuracy and much higher validation loss.
Therefore, to combat this overfitting, we will experiment with L1 / L2 regularization in subsequent models to reduce the model's tendency to overfit and obtain better results (see the sketch after these observations).
Image Size with Higher Validation Accuracy : Baseline Model for 31 px Images + Augmented (0.544 vs 0.439) ; the 128 px + Augmented baseline had the lower validation loss (2.363 vs 2.654)
Augmented Vs Non-Augmented Data : The distinction is still not clear for the baseline model, as the augmented data performed better for 128 px images but worse for 31 px images. Hence, we will continue exploring other models to determine whether augmentation is truly helpful.
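As a concrete illustration of the planned regularization change, here is a minimal sketch of adding weight penalties to the baseline's dense stack. The l2 and l1_l2 regularizers are already imported above; the penalty factors below are illustrative placeholders, not tuned values.
# Sketch: baseline dense stack with L2 / combined L1-L2 weight penalties (factors illustrative)
inputs = Input(shape=(31, 31, 1))
x = Flatten()(inputs)
x = Dense(256, activation='relu', kernel_regularizer=l2(1e-3))(x)                  # L2 only
x = Dense(256, activation='relu', kernel_regularizer=l1_l2(l1=1e-5, l2=1e-4))(x)   # combined L1/L2
outputs = Dense(NUM_CLASS, activation='softmax')(x)
regularized_sketch = Model(inputs, outputs, name='Baseline31_RegularizedSketch')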
For image classification, the first model to test is a simple Conv2D CNN model.
We define build_cnn_model_31 and build_cnn_model_128 functions for testing the Conv2D neural network model on 31 px and 128 px images.
# Function for a Conv2D Model
def build_cnn_model_31(X_train, NUM_CLASS, model_name):
    # Clear the previous tensorflow session
    tf.keras.backend.clear_session()
    # Create the input tensor
    input_shape = (X_train.shape[1], X_train.shape[2], X_train.shape[3])
    inputs = Input(shape=input_shape)
    # Building the model
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Dropout(0.3)(x)
    x = Flatten()(x)
    x = Dense(256, activation='relu')(x)
    x = Dropout(0.3)(x)
    x = Dense(NUM_CLASS, activation='softmax')(x)  # Output Layer
    # Creating the model
    model = Model(inputs=inputs, outputs=x, name=model_name)
    # Compiling the model
    model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
    # Print model summary
    model.summary()
    return model
# Function for a Conv2D Model
def build_cnn_model_128(X_train, NUM_CLASS, model_name):
    # Clear the previous tensorflow session
    tf.keras.backend.clear_session()
    # Create the input tensor
    input_shape = (X_train.shape[1], X_train.shape[2], X_train.shape[3])
    inputs = Input(shape=input_shape)
    # Building the model
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Dropout(0.5)(x)
    x = Flatten()(x)
    x = Dense(512, activation='relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(NUM_CLASS, activation='softmax')(x)  # Output Layer
    # Creating the model
    model = Model(inputs=inputs, outputs=x, name=model_name)
    # Compiling the model
    model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
    # Print model summary
    model.summary()
    return model
To train the Conv2D model, we will first fit it on the unaugmented data. After that, we will retrain the same architecture on the augmented data and compare the difference in performance.
For this model, we will test both image sizes and evaluate which one performs better with the same model parameters. (A compact sketch of the four runs follows.)
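Conceptually, the four Conv2D experiments below all follow the same recipe; a hypothetical driver loop expressing them compactly (the notebook runs them out longhand so each summary and curve can be inspected in turn):
# Hypothetical driver loop for the four Conv2D runs below
experiments = [
    (build_cnn_model_31,  X_train_31,      y_train_31,  X_val_31,  y_val_31,  "Conv2D_31"),
    (build_cnn_model_128, X_train_128,     y_train_128, X_val_128, y_val_128, "Conv2D_128"),
    (build_cnn_model_31,  X_train_31_aug,  y_train_31,  X_val_31,  y_val_31,  "Conv2D_31Augmented"),
    (build_cnn_model_128, X_train_128_aug, y_train_128, X_val_128, y_val_128, "Conv2D_128Augmented"),
]
for build, X_tr, y_tr, X_va, y_va, name in experiments:
    model = build(X_tr, NUM_CLASS, model_name=name)
    model.fit(X_tr, y_tr, epochs=50, batch_size=32, validation_data=(X_va, y_va),
              callbacks=[EarlyStopping(monitor='val_accuracy', patience=10,
                                       restore_best_weights=True)])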
IMAGE SIZE : 31 X 31 PX
conv2DModel_31 = build_cnn_model_31(X_train_31, NUM_CLASS, model_name="Conv2D_31")
Model: "Conv2D_31"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
conv2d (Conv2D) (None, 31, 31, 32) 320
max_pooling2d (MaxPooling2D)  (None, 15, 15, 32)  0
conv2d_1 (Conv2D) (None, 15, 15, 64) 18496
max_pooling2d_1 (MaxPooling2D)  (None, 7, 7, 64)  0
conv2d_2 (Conv2D) (None, 7, 7, 128) 73856
max_pooling2d_2 (MaxPooling2D)  (None, 3, 3, 128)  0
dropout (Dropout) (None, 3, 3, 128) 0
flatten (Flatten) (None, 1152) 0
dense (Dense) (None, 256) 295168
dropout_1 (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 391,695
Trainable params: 391,695
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE CONV2D MODEL
Fit the data using model.fit(). Apply early stopping as well, monitoring val_accuracy with a patience of 10 epochs and restoring the best weights, to help prevent overfitting.
conv2D_31History = conv2DModel_31.fit(X_train_31, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 6s 6ms/step - loss: 2.6153 - accuracy: 0.1280 - val_loss: 2.3218 - val_accuracy: 0.2913 Epoch 2/50 448/448 [==============================] - 2s 5ms/step - loss: 2.0299 - accuracy: 0.3465 - val_loss: 1.7434 - val_accuracy: 0.4403 Epoch 3/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4601 - accuracy: 0.5275 - val_loss: 1.2240 - val_accuracy: 0.6103 Epoch 4/50 448/448 [==============================] - 2s 5ms/step - loss: 1.0700 - accuracy: 0.6591 - val_loss: 0.8920 - val_accuracy: 0.7153 Epoch 5/50 448/448 [==============================] - 2s 5ms/step - loss: 0.7853 - accuracy: 0.7490 - val_loss: 0.8846 - val_accuracy: 0.7137 Epoch 6/50 448/448 [==============================] - 2s 5ms/step - loss: 0.6284 - accuracy: 0.7990 - val_loss: 0.5484 - val_accuracy: 0.8310 Epoch 7/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4979 - accuracy: 0.8391 - val_loss: 0.4860 - val_accuracy: 0.8567 Epoch 8/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4091 - accuracy: 0.8667 - val_loss: 0.4935 - val_accuracy: 0.8577 Epoch 9/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3509 - accuracy: 0.8824 - val_loss: 0.4661 - val_accuracy: 0.8623 Epoch 10/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2881 - accuracy: 0.9045 - val_loss: 0.4437 - val_accuracy: 0.8697 Epoch 11/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2624 - accuracy: 0.9136 - val_loss: 0.3922 - val_accuracy: 0.8887 Epoch 12/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2213 - accuracy: 0.9285 - val_loss: 0.4276 - val_accuracy: 0.8753 Epoch 13/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2172 - accuracy: 0.9284 - val_loss: 0.3641 - val_accuracy: 0.8940 Epoch 14/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1999 - accuracy: 0.9326 - val_loss: 0.3415 - val_accuracy: 0.9070 Epoch 15/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1747 - accuracy: 0.9419 - val_loss: 0.3888 - val_accuracy: 0.8973 Epoch 16/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1536 - accuracy: 0.9478 - val_loss: 0.3520 - val_accuracy: 0.9067 Epoch 17/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1342 - accuracy: 0.9556 - val_loss: 0.3294 - val_accuracy: 0.9177 Epoch 18/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1171 - accuracy: 0.9608 - val_loss: 0.3418 - val_accuracy: 0.9103 Epoch 19/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1285 - accuracy: 0.9569 - val_loss: 0.3500 - val_accuracy: 0.9083 Epoch 20/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1123 - accuracy: 0.9629 - val_loss: 0.3324 - val_accuracy: 0.9160 Epoch 21/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1107 - accuracy: 0.9642 - val_loss: 0.3977 - val_accuracy: 0.9047 Epoch 22/50 448/448 [==============================] - 2s 6ms/step - loss: 0.1095 - accuracy: 0.9646 - val_loss: 0.3528 - val_accuracy: 0.9137 Epoch 23/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0921 - accuracy: 0.9702 - val_loss: 0.3737 - val_accuracy: 0.9067 Epoch 24/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0927 - accuracy: 0.9701 - val_loss: 0.3673 - val_accuracy: 0.9080 Epoch 25/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0957 - accuracy: 0.9673 - val_loss: 0.3307 - 
val_accuracy: 0.9163 Epoch 26/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0765 - accuracy: 0.9750 - val_loss: 0.3816 - val_accuracy: 0.9197 Epoch 27/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0879 - accuracy: 0.9708 - val_loss: 0.3541 - val_accuracy: 0.9210 Epoch 28/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0717 - accuracy: 0.9763 - val_loss: 0.3622 - val_accuracy: 0.9140 Epoch 29/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0718 - accuracy: 0.9755 - val_loss: 0.4004 - val_accuracy: 0.9110 Epoch 30/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0645 - accuracy: 0.9787 - val_loss: 0.3442 - val_accuracy: 0.9227 Epoch 31/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0669 - accuracy: 0.9768 - val_loss: 0.3703 - val_accuracy: 0.9183 Epoch 32/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0747 - accuracy: 0.9758 - val_loss: 0.3526 - val_accuracy: 0.9267 Epoch 33/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0678 - accuracy: 0.9770 - val_loss: 0.3504 - val_accuracy: 0.9187 Epoch 34/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0651 - accuracy: 0.9782 - val_loss: 0.4153 - val_accuracy: 0.9117 Epoch 35/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0579 - accuracy: 0.9814 - val_loss: 0.3950 - val_accuracy: 0.9163 Epoch 36/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0755 - accuracy: 0.9770 - val_loss: 0.3593 - val_accuracy: 0.9190 Epoch 37/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0499 - accuracy: 0.9831 - val_loss: 0.3584 - val_accuracy: 0.9213 Epoch 38/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0557 - accuracy: 0.9824 - val_loss: 0.4012 - val_accuracy: 0.9147 Epoch 39/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0593 - accuracy: 0.9812 - val_loss: 0.3541 - val_accuracy: 0.9243 Epoch 40/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0553 - accuracy: 0.9828 - val_loss: 0.3791 - val_accuracy: 0.9250 Epoch 41/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0530 - accuracy: 0.9823 - val_loss: 0.3897 - val_accuracy: 0.9203 Epoch 42/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0402 - accuracy: 0.9859 - val_loss: 0.3757 - val_accuracy: 0.9197
EXTRACT THE TRAINING HISTORY OF THE CONV2D MODEL INTO A DICTIONARY
Use .history to extract the training data from the Conv2D model and pd.concat() to display the results.
conv2D_31History = conv2D_31History.history
# Get model results
result_df = compile_results(conv2D_31History, "Conv2DModel_31", 32, result_df)
display(result_df.iloc[4])
Model Name           Conv2DModel_31
Epochs               42
Batch Size           32
Train Loss           0.07472
Val Loss             0.352616
Train Acc            0.975777
Val Acc              0.926667
[Train - Val] Acc    0.04911
Name: 4, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
plot_loss_curve(conv2D_31History)
plt.show()
IMAGE SIZE : 128 X 128 PX
conv2DModel_128 = build_cnn_model_128(X_train_128, NUM_CLASS, model_name="Conv2D_128")
Model: "Conv2D_128"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
conv2d (Conv2D) (None, 128, 128, 32) 320
max_pooling2d (MaxPooling2D)  (None, 64, 64, 32)  0
conv2d_1 (Conv2D) (None, 64, 64, 64) 18496
max_pooling2d_1 (MaxPooling2D)  (None, 32, 32, 64)  0
conv2d_2 (Conv2D) (None, 32, 32, 128) 73856
max_pooling2d_2 (MaxPooling2D)  (None, 16, 16, 128)  0
conv2d_3 (Conv2D) (None, 16, 16, 256) 295168
max_pooling2d_3 (MaxPooling2D)  (None, 8, 8, 256)  0
dropout (Dropout) (None, 8, 8, 256) 0
flatten (Flatten) (None, 16384) 0
dense (Dense) (None, 512) 8389120
dropout_1 (Dropout) (None, 512) 0
dense_1 (Dense) (None, 15) 7695
=================================================================
Total params: 8,784,655
Trainable params: 8,784,655
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE CONV2D MODEL
Fit the data using model.fit(). Apply early stopping as well, monitoring val_accuracy with a patience of 10 epochs and restoring the best weights, to help prevent overfitting.
conv2D_128History = conv2DModel_128.fit(X_train_128, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 6s 12ms/step - loss: 2.5655 - accuracy: 0.1459 - val_loss: 2.3099 - val_accuracy: 0.2657 Epoch 2/50 448/448 [==============================] - 5s 11ms/step - loss: 1.8597 - accuracy: 0.4133 - val_loss: 1.3487 - val_accuracy: 0.5903 Epoch 3/50 448/448 [==============================] - 5s 11ms/step - loss: 1.1049 - accuracy: 0.6539 - val_loss: 0.8370 - val_accuracy: 0.7473 Epoch 4/50 448/448 [==============================] - 5s 11ms/step - loss: 0.6697 - accuracy: 0.7888 - val_loss: 0.6888 - val_accuracy: 0.7903 Epoch 5/50 448/448 [==============================] - 5s 11ms/step - loss: 0.4343 - accuracy: 0.8647 - val_loss: 0.4472 - val_accuracy: 0.8657 Epoch 6/50 448/448 [==============================] - 5s 11ms/step - loss: 0.3028 - accuracy: 0.9048 - val_loss: 0.4593 - val_accuracy: 0.8677 Epoch 7/50 448/448 [==============================] - 5s 11ms/step - loss: 0.2343 - accuracy: 0.9252 - val_loss: 0.4330 - val_accuracy: 0.8807 Epoch 8/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1991 - accuracy: 0.9354 - val_loss: 0.3865 - val_accuracy: 0.8857 Epoch 9/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1562 - accuracy: 0.9485 - val_loss: 0.3388 - val_accuracy: 0.9080 Epoch 10/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1347 - accuracy: 0.9581 - val_loss: 0.2926 - val_accuracy: 0.9180 Epoch 11/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1186 - accuracy: 0.9613 - val_loss: 0.3303 - val_accuracy: 0.9167 Epoch 12/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1005 - accuracy: 0.9684 - val_loss: 0.3322 - val_accuracy: 0.9150 Epoch 13/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1118 - accuracy: 0.9621 - val_loss: 0.4001 - val_accuracy: 0.9023 Epoch 14/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0775 - accuracy: 0.9750 - val_loss: 0.3662 - val_accuracy: 0.9157 Epoch 15/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0676 - accuracy: 0.9782 - val_loss: 0.3749 - val_accuracy: 0.9060 Epoch 16/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0624 - accuracy: 0.9808 - val_loss: 0.3417 - val_accuracy: 0.9140 Epoch 17/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0560 - accuracy: 0.9808 - val_loss: 0.3371 - val_accuracy: 0.9183 Epoch 18/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0732 - accuracy: 0.9770 - val_loss: 0.3404 - val_accuracy: 0.9243 Epoch 19/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0559 - accuracy: 0.9816 - val_loss: 0.3194 - val_accuracy: 0.9213 Epoch 20/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0549 - accuracy: 0.9826 - val_loss: 0.3146 - val_accuracy: 0.9290 Epoch 21/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0515 - accuracy: 0.9833 - val_loss: 0.3171 - val_accuracy: 0.9247 Epoch 22/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0415 - accuracy: 0.9851 - val_loss: 0.3352 - val_accuracy: 0.9287 Epoch 23/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0463 - accuracy: 0.9856 - val_loss: 0.3470 - val_accuracy: 0.9223 Epoch 24/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0449 - accuracy: 0.9864 - val_loss: 0.3574 - val_accuracy: 0.9243 Epoch 25/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0445 - accuracy: 0.9846 - 
val_loss: 0.3155 - val_accuracy: 0.9310 Epoch 26/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0385 - accuracy: 0.9876 - val_loss: 0.3212 - val_accuracy: 0.9293 Epoch 27/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0305 - accuracy: 0.9908 - val_loss: 0.3180 - val_accuracy: 0.9287 Epoch 28/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0487 - accuracy: 0.9853 - val_loss: 0.3288 - val_accuracy: 0.9237 Epoch 29/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0286 - accuracy: 0.9908 - val_loss: 0.3813 - val_accuracy: 0.9207 Epoch 30/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0338 - accuracy: 0.9899 - val_loss: 0.3027 - val_accuracy: 0.9307 Epoch 31/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0368 - accuracy: 0.9893 - val_loss: 0.3396 - val_accuracy: 0.9277 Epoch 32/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0276 - accuracy: 0.9910 - val_loss: 0.2955 - val_accuracy: 0.9340 Epoch 33/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0252 - accuracy: 0.9918 - val_loss: 0.2971 - val_accuracy: 0.9370 Epoch 34/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0275 - accuracy: 0.9918 - val_loss: 0.3250 - val_accuracy: 0.9360 Epoch 35/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0294 - accuracy: 0.9913 - val_loss: 0.3679 - val_accuracy: 0.9297 Epoch 36/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0320 - accuracy: 0.9890 - val_loss: 0.3336 - val_accuracy: 0.9310 Epoch 37/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0376 - accuracy: 0.9889 - val_loss: 0.3293 - val_accuracy: 0.9340 Epoch 38/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0258 - accuracy: 0.9920 - val_loss: 0.4145 - val_accuracy: 0.9157 Epoch 39/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0256 - accuracy: 0.9914 - val_loss: 0.2995 - val_accuracy: 0.9367 Epoch 40/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0223 - accuracy: 0.9930 - val_loss: 0.3451 - val_accuracy: 0.9297 Epoch 41/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0285 - accuracy: 0.9916 - val_loss: 0.3243 - val_accuracy: 0.9340 Epoch 42/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0227 - accuracy: 0.9927 - val_loss: 0.3538 - val_accuracy: 0.9353 Epoch 43/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0266 - accuracy: 0.9919 - val_loss: 0.3358 - val_accuracy: 0.9307
EXTRACT THE TRAINING HISTORY OF THE CONV2D MODEL INTO A DICTIONARY
Use .history to extract the training data from the Conv2D model and pd.concat() to display the results.
conv2D_128History = conv2D_128History.history
result_df = compile_results(conv2D_128History, "Conv2DModel_128", 32, result_df)
display(result_df.iloc[5])
Model Name           Conv2DModel_128
Epochs               43
Batch Size           32
Train Loss           0.025198
Val Loss             0.297138
Train Acc            0.991832
Val Acc              0.937
[Train - Val] Acc    0.054832
Name: 5, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
plot_loss_curve(conv2D_128History)
plt.show()
We will now train the same model on the augmented data and compare the change in performance, to evaluate whether augmenting the image data affects the model's performance and its ability to generalize to unseen data.
For the augmented data, we will again test both image sizes, using the same model parameters as the unaugmented runs, to evaluate which image size performs better. (A sketch of how such augmented arrays are typically produced follows.)
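The augmented arrays (X_train_31_aug, X_train_128_aug) were produced earlier in the notebook; as a reminder, a representative sketch of how such arrays can be generated with ImageDataGenerator (the transform parameters here are illustrative, not the exact ones used):
# Illustrative augmentation pipeline (actual parameters are defined earlier in the notebook)
augmenter = ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
                               height_shift_range=0.1, horizontal_flip=True)
# One augmented variant per training image, order preserved so y_train_31 still lines up
X_train_31_aug = next(augmenter.flow(X_train_31, batch_size=len(X_train_31), shuffle=False))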
IMAGE SIZE : 31 X 31 PX
conv2DModel_31Augmented = build_cnn_model_31(X_train_31_aug, NUM_CLASS, model_name="Conv2D_31Augmented")
Model: "Conv2D_31Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
conv2d (Conv2D) (None, 31, 31, 32) 320
max_pooling2d (MaxPooling2D)  (None, 15, 15, 32)  0
conv2d_1 (Conv2D) (None, 15, 15, 64) 18496
max_pooling2d_1 (MaxPooling2D)  (None, 7, 7, 64)  0
conv2d_2 (Conv2D) (None, 7, 7, 128) 73856
max_pooling2d_2 (MaxPooling2D)  (None, 3, 3, 128)  0
dropout (Dropout) (None, 3, 3, 128) 0
flatten (Flatten) (None, 1152) 0
dense (Dense) (None, 256) 295168
dropout_1 (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 391,695
Trainable params: 391,695
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE CONV2D MODEL
Fit the data using model.fit(). Apply early stopping as well, monitoring val_accuracy with a patience of 10 epochs and restoring the best weights, to help prevent overfitting.
conv2D_31AugmentedHistory = conv2DModel_31Augmented.fit(X_train_31_aug, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 3s 6ms/step - loss: 2.6419 - accuracy: 0.1162 - val_loss: 2.5056 - val_accuracy: 0.2067 Epoch 2/50 448/448 [==============================] - 2s 5ms/step - loss: 2.3385 - accuracy: 0.2336 - val_loss: 1.9261 - val_accuracy: 0.4103 Epoch 3/50 448/448 [==============================] - 2s 5ms/step - loss: 1.9361 - accuracy: 0.3645 - val_loss: 1.4715 - val_accuracy: 0.5463 Epoch 4/50 448/448 [==============================] - 2s 5ms/step - loss: 1.6521 - accuracy: 0.4620 - val_loss: 1.2654 - val_accuracy: 0.5997 Epoch 5/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4050 - accuracy: 0.5410 - val_loss: 0.9817 - val_accuracy: 0.7070 Epoch 6/50 448/448 [==============================] - 2s 5ms/step - loss: 1.1379 - accuracy: 0.6275 - val_loss: 0.8974 - val_accuracy: 0.7360 Epoch 7/50 448/448 [==============================] - 2s 5ms/step - loss: 0.9663 - accuracy: 0.6815 - val_loss: 0.7195 - val_accuracy: 0.7773 Epoch 8/50 448/448 [==============================] - 2s 5ms/step - loss: 0.8556 - accuracy: 0.7168 - val_loss: 0.5989 - val_accuracy: 0.8247 Epoch 9/50 448/448 [==============================] - 2s 5ms/step - loss: 0.7184 - accuracy: 0.7667 - val_loss: 0.5517 - val_accuracy: 0.8310 Epoch 10/50 448/448 [==============================] - 2s 5ms/step - loss: 0.6490 - accuracy: 0.7869 - val_loss: 0.6342 - val_accuracy: 0.7937 Epoch 11/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5687 - accuracy: 0.8110 - val_loss: 0.5067 - val_accuracy: 0.8373 Epoch 12/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5002 - accuracy: 0.8303 - val_loss: 0.4673 - val_accuracy: 0.8590 Epoch 13/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4575 - accuracy: 0.8435 - val_loss: 0.4856 - val_accuracy: 0.8510 Epoch 14/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4143 - accuracy: 0.8641 - val_loss: 0.4895 - val_accuracy: 0.8573 Epoch 15/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3817 - accuracy: 0.8753 - val_loss: 0.4265 - val_accuracy: 0.8757 Epoch 16/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3396 - accuracy: 0.8831 - val_loss: 0.4284 - val_accuracy: 0.8727 Epoch 17/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3082 - accuracy: 0.8957 - val_loss: 0.3761 - val_accuracy: 0.8957 Epoch 18/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2933 - accuracy: 0.9005 - val_loss: 0.4077 - val_accuracy: 0.8820 Epoch 19/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2774 - accuracy: 0.9078 - val_loss: 0.4294 - val_accuracy: 0.8807 Epoch 20/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2521 - accuracy: 0.9151 - val_loss: 0.3824 - val_accuracy: 0.8937 Epoch 21/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2487 - accuracy: 0.9180 - val_loss: 0.4733 - val_accuracy: 0.8770 Epoch 22/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2178 - accuracy: 0.9275 - val_loss: 0.4656 - val_accuracy: 0.8740 Epoch 23/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2120 - accuracy: 0.9297 - val_loss: 0.4170 - val_accuracy: 0.8817 Epoch 24/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2015 - accuracy: 0.9314 - val_loss: 0.3626 - val_accuracy: 0.8983 Epoch 25/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2034 - accuracy: 0.9335 - val_loss: 0.4199 - 
val_accuracy: 0.8840 Epoch 26/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2070 - accuracy: 0.9317 - val_loss: 0.4262 - val_accuracy: 0.8917 Epoch 27/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1820 - accuracy: 0.9385 - val_loss: 0.4439 - val_accuracy: 0.8793 Epoch 28/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1811 - accuracy: 0.9387 - val_loss: 0.3795 - val_accuracy: 0.8937 Epoch 29/50 448/448 [==============================] - 2s 6ms/step - loss: 0.1729 - accuracy: 0.9409 - val_loss: 0.3685 - val_accuracy: 0.8997 Epoch 30/50 448/448 [==============================] - 2s 6ms/step - loss: 0.1604 - accuracy: 0.9435 - val_loss: 0.5273 - val_accuracy: 0.8680 Epoch 31/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1674 - accuracy: 0.9445 - val_loss: 0.3812 - val_accuracy: 0.8990 Epoch 32/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1384 - accuracy: 0.9527 - val_loss: 0.3875 - val_accuracy: 0.9007 Epoch 33/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1369 - accuracy: 0.9542 - val_loss: 0.3972 - val_accuracy: 0.9040 Epoch 34/50 448/448 [==============================] - 2s 6ms/step - loss: 0.1414 - accuracy: 0.9549 - val_loss: 0.4107 - val_accuracy: 0.9023 Epoch 35/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1326 - accuracy: 0.9570 - val_loss: 0.3894 - val_accuracy: 0.9027 Epoch 36/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1408 - accuracy: 0.9525 - val_loss: 0.4608 - val_accuracy: 0.8983 Epoch 37/50 448/448 [==============================] - 2s 6ms/step - loss: 0.1397 - accuracy: 0.9567 - val_loss: 0.5024 - val_accuracy: 0.8843 Epoch 38/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1295 - accuracy: 0.9579 - val_loss: 0.3976 - val_accuracy: 0.9027 Epoch 39/50 448/448 [==============================] - 2s 6ms/step - loss: 0.1427 - accuracy: 0.9543 - val_loss: 0.4041 - val_accuracy: 0.9007 Epoch 40/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1338 - accuracy: 0.9561 - val_loss: 0.3889 - val_accuracy: 0.9073 Epoch 41/50 448/448 [==============================] - 2s 6ms/step - loss: 0.1445 - accuracy: 0.9547 - val_loss: 0.3826 - val_accuracy: 0.9047 Epoch 42/50 448/448 [==============================] - 2s 6ms/step - loss: 0.1331 - accuracy: 0.9566 - val_loss: 0.5350 - val_accuracy: 0.8810 Epoch 43/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1224 - accuracy: 0.9587 - val_loss: 0.4127 - val_accuracy: 0.9127 Epoch 44/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1110 - accuracy: 0.9634 - val_loss: 0.4530 - val_accuracy: 0.9063 Epoch 45/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1189 - accuracy: 0.9637 - val_loss: 0.4596 - val_accuracy: 0.8990 Epoch 46/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1227 - accuracy: 0.9599 - val_loss: 0.3762 - val_accuracy: 0.9100 Epoch 47/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1110 - accuracy: 0.9628 - val_loss: 0.4313 - val_accuracy: 0.9107 Epoch 48/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1028 - accuracy: 0.9658 - val_loss: 0.4370 - val_accuracy: 0.9033 Epoch 49/50 448/448 [==============================] - 2s 5ms/step - loss: 0.0957 - accuracy: 0.9669 - val_loss: 0.4494 - val_accuracy: 0.9043 Epoch 50/50 448/448 [==============================] - 2s 5ms/step - loss: 0.1190 - accuracy: 0.9602 
- val_loss: 0.4279 - val_accuracy: 0.9087
EXTRACT THE TRAINING HISTORY OF THE CONV2D MODEL INTO A DICTIONARY
Use .history to extract the training data from the Conv2D model and pd.concat() to display the results.
conv2D_31AugmentedHistory = conv2D_31AugmentedHistory.history
# Get model results
result_df = compile_results(conv2D_31AugmentedHistory, "Conv2DModel_31Augmented", 32, result_df)
display(result_df.iloc[6])
Model Name           Conv2DModel_31Augmented
Epochs               50
Batch Size           32
Train Loss           0.122438
Val Loss             0.412674
Train Acc            0.958743
Val Acc              0.912667
[Train - Val] Acc    0.046077
Name: 6, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
plot_loss_curve(conv2D_31AugmentedHistory)
plt.show()
IMAGE SIZE : 128 X 128 PX
conv2DModel_128Augmented = build_cnn_model_128(X_train_128_aug, NUM_CLASS, model_name="Conv2D_128Augmented")
Model: "Conv2D_128Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
conv2d (Conv2D) (None, 128, 128, 32) 320
max_pooling2d (MaxPooling2D)  (None, 64, 64, 32)  0
conv2d_1 (Conv2D) (None, 64, 64, 64) 18496
max_pooling2d_1 (MaxPooling2D)  (None, 32, 32, 64)  0
conv2d_2 (Conv2D) (None, 32, 32, 128) 73856
max_pooling2d_2 (MaxPooling2D)  (None, 16, 16, 128)  0
conv2d_3 (Conv2D) (None, 16, 16, 256) 295168
max_pooling2d_3 (MaxPooling2D)  (None, 8, 8, 256)  0
dropout (Dropout) (None, 8, 8, 256) 0
flatten (Flatten) (None, 16384) 0
dense (Dense) (None, 512) 8389120
dropout_1 (Dropout) (None, 512) 0
dense_1 (Dense) (None, 15) 7695
=================================================================
Total params: 8,784,655
Trainable params: 8,784,655
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE CONV2D MODEL
Fit the data using model.fit(). Apply early stopping as well, monitoring val_accuracy with a patience of 10 epochs and restoring the best weights, to help prevent overfitting.
conv2D_128AugmentedHistory = conv2DModel_128Augmented.fit(X_train_128_aug, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 6s 12ms/step - loss: 2.5539 - accuracy: 0.1499 - val_loss: 1.9559 - val_accuracy: 0.4030 Epoch 2/50 448/448 [==============================] - 5s 11ms/step - loss: 1.9724 - accuracy: 0.3574 - val_loss: 1.7035 - val_accuracy: 0.4477 Epoch 3/50 448/448 [==============================] - 5s 11ms/step - loss: 1.3869 - accuracy: 0.5546 - val_loss: 1.0818 - val_accuracy: 0.6647 Epoch 4/50 448/448 [==============================] - 5s 11ms/step - loss: 0.9240 - accuracy: 0.7013 - val_loss: 0.6184 - val_accuracy: 0.8047 Epoch 5/50 448/448 [==============================] - 5s 11ms/step - loss: 0.6358 - accuracy: 0.7940 - val_loss: 0.5708 - val_accuracy: 0.8280 Epoch 6/50 448/448 [==============================] - 5s 11ms/step - loss: 0.4769 - accuracy: 0.8447 - val_loss: 0.4165 - val_accuracy: 0.8703 Epoch 7/50 448/448 [==============================] - 5s 11ms/step - loss: 0.3561 - accuracy: 0.8794 - val_loss: 0.3050 - val_accuracy: 0.9033 Epoch 8/50 448/448 [==============================] - 5s 11ms/step - loss: 0.2742 - accuracy: 0.9113 - val_loss: 0.3707 - val_accuracy: 0.8830 Epoch 9/50 448/448 [==============================] - 5s 11ms/step - loss: 0.2353 - accuracy: 0.9231 - val_loss: 0.3209 - val_accuracy: 0.9083 Epoch 10/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1703 - accuracy: 0.9438 - val_loss: 0.2848 - val_accuracy: 0.9153 Epoch 11/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1681 - accuracy: 0.9463 - val_loss: 0.2695 - val_accuracy: 0.9207 Epoch 12/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1347 - accuracy: 0.9564 - val_loss: 0.3188 - val_accuracy: 0.9093 Epoch 13/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1279 - accuracy: 0.9590 - val_loss: 0.2482 - val_accuracy: 0.9313 Epoch 14/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1182 - accuracy: 0.9640 - val_loss: 0.3365 - val_accuracy: 0.9087 Epoch 15/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1187 - accuracy: 0.9630 - val_loss: 0.2623 - val_accuracy: 0.9267 Epoch 16/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1060 - accuracy: 0.9666 - val_loss: 0.2371 - val_accuracy: 0.9357 Epoch 17/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1073 - accuracy: 0.9672 - val_loss: 0.2397 - val_accuracy: 0.9370 Epoch 18/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0989 - accuracy: 0.9682 - val_loss: 0.2671 - val_accuracy: 0.9267 Epoch 19/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0889 - accuracy: 0.9708 - val_loss: 0.2576 - val_accuracy: 0.9323 Epoch 20/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0740 - accuracy: 0.9760 - val_loss: 0.2306 - val_accuracy: 0.9413 Epoch 21/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0785 - accuracy: 0.9756 - val_loss: 0.2096 - val_accuracy: 0.9407 Epoch 22/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0629 - accuracy: 0.9800 - val_loss: 0.2866 - val_accuracy: 0.9257 Epoch 23/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0604 - accuracy: 0.9807 - val_loss: 0.2413 - val_accuracy: 0.9407 Epoch 24/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0547 - accuracy: 0.9825 - val_loss: 0.2577 - val_accuracy: 0.9440 Epoch 25/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0649 - accuracy: 0.9790 - 
val_loss: 0.2422 - val_accuracy: 0.9443 Epoch 26/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0630 - accuracy: 0.9794 - val_loss: 0.2960 - val_accuracy: 0.9300 Epoch 27/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0727 - accuracy: 0.9775 - val_loss: 0.2404 - val_accuracy: 0.9430 Epoch 28/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0653 - accuracy: 0.9805 - val_loss: 0.2501 - val_accuracy: 0.9380 Epoch 29/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0567 - accuracy: 0.9818 - val_loss: 0.2636 - val_accuracy: 0.9417 Epoch 30/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0600 - accuracy: 0.9828 - val_loss: 0.2305 - val_accuracy: 0.9457 Epoch 31/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0444 - accuracy: 0.9875 - val_loss: 0.2356 - val_accuracy: 0.9460 Epoch 32/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0456 - accuracy: 0.9861 - val_loss: 0.2772 - val_accuracy: 0.9397 Epoch 33/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0409 - accuracy: 0.9872 - val_loss: 0.2298 - val_accuracy: 0.9457 Epoch 34/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0388 - accuracy: 0.9882 - val_loss: 0.2637 - val_accuracy: 0.9437 Epoch 35/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0433 - accuracy: 0.9872 - val_loss: 0.2160 - val_accuracy: 0.9533 Epoch 36/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0411 - accuracy: 0.9874 - val_loss: 0.2291 - val_accuracy: 0.9460 Epoch 37/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0510 - accuracy: 0.9849 - val_loss: 0.2190 - val_accuracy: 0.9470 Epoch 38/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0488 - accuracy: 0.9848 - val_loss: 0.2394 - val_accuracy: 0.9453 Epoch 39/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0570 - accuracy: 0.9824 - val_loss: 0.2192 - val_accuracy: 0.9450 Epoch 40/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0353 - accuracy: 0.9884 - val_loss: 0.2207 - val_accuracy: 0.9477 Epoch 41/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0454 - accuracy: 0.9872 - val_loss: 0.3073 - val_accuracy: 0.9353 Epoch 42/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0383 - accuracy: 0.9883 - val_loss: 0.2363 - val_accuracy: 0.9460 Epoch 43/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0297 - accuracy: 0.9911 - val_loss: 0.2259 - val_accuracy: 0.9447 Epoch 44/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0513 - accuracy: 0.9862 - val_loss: 0.2641 - val_accuracy: 0.9400 Epoch 45/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0445 - accuracy: 0.9876 - val_loss: 0.2300 - val_accuracy: 0.9447
EXTRACT THE TRAINING HISTORY OF THE CONV2D MODEL INTO A DICTIONARY
Use .history to extract the training data from the Conv2D model and pd.concat() to display the results.
conv2D_128AugmentedHistory = conv2D_128AugmentedHistory.history
# Get model results
result_df = compile_results(conv2D_128AugmentedHistory, "Conv2DModel_128Augmented", 32, result_df)
display(result_df.iloc[7])
Model Name           Conv2DModel_128Augmented
Epochs               45
Batch Size           32
Train Loss           0.043292
Val Loss             0.216016
Train Acc            0.987155
Val Acc              0.953333
[Train - Val] Acc    0.033822
Name: 7, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
plot_loss_curve(conv2D_128AugmentedHistory)
plt.show()
From the Conv2D model results for both image sizes:
Image Size with Higher Validation Accuracy & Lower Loss : 128 x 128 px Images + Augmented
Augmented Vs Non-Augmented Data : The picture is mixed for this model. For 128 px images, the augmented run achieved the highest validation accuracy (0.9533 vs 0.937) and the lowest validation loss (0.2160 vs 0.2971), while for 31 px images the non-augmented run performed slightly better (0.9267 vs 0.9127). Augmentation therefore appears to help the model generalize at the larger image size, but to confirm whether this holds in all scenarios, we will continue testing on other models. (A quick way to compare all runs so far is sketched below.)
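One quick way to sanity-check these comparisons is to sort the running results table by validation accuracy:
# Rank all models trained so far by validation accuracy (highest first)
display(result_df.sort_values('Val Acc', ascending=False)[['Model Name', 'Val Acc', 'Val Loss']])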
BUILDING THE CUSTOMVGG MODEL
For our modified VGG model, the network is built from VGG blocks. Each block contains two or three Conv2D layers, each followed by BatchNormalization(), and ends with a MaxPool2D layer. After the stacked VGG blocks, a GlobalAveragePooling2D() layer feeds one Dense layer with a ReLU activation, followed by dropout and then the output layer.
def vgg_block(num_convs, num_channels):
    # A VGG-style block: num_convs Conv2D layers (each followed by BatchNormalization),
    # then a stride-2 max pool that halves the spatial dimensions
    block = Sequential()
    for _ in range(num_convs):
        block.add(Conv2D(num_channels, kernel_size=3, padding='same', activation='relu'))
        block.add(BatchNormalization())
    block.add(MaxPool2D(pool_size=2, strides=2))
    return block
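As a quick sanity check, three stacked blocks with stride-2 pooling downsample a 31 x 31 input to 15 x 15, then 7 x 7, then 3 x 3, matching the CustomVGG_31 summary below:
# Shape check: each vgg_block halves the spatial dimensions (floor division)
probe = Input(shape=(31, 31, 1))
out = vgg_block(2, 32)(probe)
out = vgg_block(2, 64)(out)
out = vgg_block(2, 128)(out)
print(out.shape)  # (None, 3, 3, 128)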
To train the CustomVGG model, we will first use our unaugmented data to fit the model. For this model, we will continue testing both image sizes as well as augmented vs. unaugmented data.
IMAGE SIZE : 31 X 31 PX
# Building the CustomVGG Model - Without Data Augmentation
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_31.shape[1], X_train_31.shape[2],X_train_31.shape[3])
inputs = Input(shape=input_shape)
# Building the model
x = vgg_block(2, 32)(inputs) # Use less filters compared to VGG16
x = vgg_block(2, 64)(x)
x = vgg_block(2, 128)(x) # Reduced depth and number of filters
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.3)(x)
x = Dense(NUM_CLASS, activation='softmax')(x) # Output layer for 15 classes
# Creating the model
customVGGModel_31 = Model(inputs=inputs, outputs=x, name='CustomVGG_31')
# Compiling the model
customVGGModel_31.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
customVGGModel_31.summary()
Model: "CustomVGG_31"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
sequential (Sequential) (None, 15, 15, 32) 9824
sequential_1 (Sequential) (None, 7, 7, 64) 55936
sequential_2 (Sequential) (None, 3, 3, 128) 222464
global_average_pooling2d (GlobalAveragePooling2D)  (None, 128)  0
dense (Dense) (None, 256) 33024
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 325,103
Trainable params: 324,207
Non-trainable params: 896
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMVGG MODEL
Fit the data using model.fit(). Apply early stopping as well, monitoring val_accuracy with a patience of 10 epochs and restoring the best weights, to help prevent overfitting.
customVGG_31History = customVGGModel_31.fit(X_train_31, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 5s 9ms/step - loss: 1.3451 - accuracy: 0.5630 - val_loss: 3.3539 - val_accuracy: 0.2260 Epoch 2/50 448/448 [==============================] - 4s 8ms/step - loss: 0.5786 - accuracy: 0.8118 - val_loss: 0.9585 - val_accuracy: 0.7067 Epoch 3/50 448/448 [==============================] - 4s 8ms/step - loss: 0.3339 - accuracy: 0.8938 - val_loss: 0.8140 - val_accuracy: 0.7497 Epoch 4/50 448/448 [==============================] - 4s 8ms/step - loss: 0.2158 - accuracy: 0.9284 - val_loss: 0.5879 - val_accuracy: 0.8263 Epoch 5/50 448/448 [==============================] - 4s 8ms/step - loss: 0.1333 - accuracy: 0.9586 - val_loss: 0.4883 - val_accuracy: 0.8687 Epoch 6/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0925 - accuracy: 0.9694 - val_loss: 0.3855 - val_accuracy: 0.8913 Epoch 7/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0724 - accuracy: 0.9770 - val_loss: 0.7984 - val_accuracy: 0.8003 Epoch 8/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0555 - accuracy: 0.9824 - val_loss: 0.3045 - val_accuracy: 0.9190 Epoch 9/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0477 - accuracy: 0.9860 - val_loss: 0.2613 - val_accuracy: 0.9290 Epoch 10/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0387 - accuracy: 0.9870 - val_loss: 0.3697 - val_accuracy: 0.9043 Epoch 11/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0272 - accuracy: 0.9924 - val_loss: 0.2813 - val_accuracy: 0.9257 Epoch 12/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0231 - accuracy: 0.9926 - val_loss: 0.3769 - val_accuracy: 0.9077 Epoch 13/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0179 - accuracy: 0.9936 - val_loss: 0.2064 - val_accuracy: 0.9483 Epoch 14/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0225 - accuracy: 0.9931 - val_loss: 0.2437 - val_accuracy: 0.9383 Epoch 15/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0168 - accuracy: 0.9948 - val_loss: 0.1917 - val_accuracy: 0.9560 Epoch 16/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0221 - accuracy: 0.9930 - val_loss: 0.2170 - val_accuracy: 0.9493 Epoch 17/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0181 - accuracy: 0.9946 - val_loss: 0.2697 - val_accuracy: 0.9400 Epoch 18/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0135 - accuracy: 0.9961 - val_loss: 0.2304 - val_accuracy: 0.9500 Epoch 19/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0144 - accuracy: 0.9957 - val_loss: 0.2267 - val_accuracy: 0.9407 Epoch 20/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0156 - accuracy: 0.9949 - val_loss: 0.3186 - val_accuracy: 0.9247 Epoch 21/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0136 - accuracy: 0.9960 - val_loss: 0.2033 - val_accuracy: 0.9550 Epoch 22/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0088 - accuracy: 0.9975 - val_loss: 0.2313 - val_accuracy: 0.9433 Epoch 23/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0143 - accuracy: 0.9953 - val_loss: 0.2390 - val_accuracy: 0.9433 Epoch 24/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0067 - accuracy: 0.9983 - val_loss: 0.1693 - val_accuracy: 0.9623 Epoch 25/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0054 - accuracy: 0.9984 - val_loss: 0.1610 - 
val_accuracy: 0.9627 Epoch 26/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0032 - accuracy: 0.9991 - val_loss: 0.1698 - val_accuracy: 0.9597 Epoch 27/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0042 - accuracy: 0.9987 - val_loss: 0.2020 - val_accuracy: 0.9563 Epoch 28/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0034 - accuracy: 0.9989 - val_loss: 0.2288 - val_accuracy: 0.9523 Epoch 29/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0050 - accuracy: 0.9982 - val_loss: 0.1913 - val_accuracy: 0.9580 Epoch 30/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0059 - accuracy: 0.9980 - val_loss: 0.3467 - val_accuracy: 0.9143 Epoch 31/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0049 - accuracy: 0.9983 - val_loss: 0.2090 - val_accuracy: 0.9557 Epoch 32/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0061 - accuracy: 0.9984 - val_loss: 0.1853 - val_accuracy: 0.9590 Epoch 33/50 448/448 [==============================] - 4s 8ms/step - loss: 0.0026 - accuracy: 0.9996 - val_loss: 0.1822 - val_accuracy: 0.9603 Epoch 34/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0053 - accuracy: 0.9986 - val_loss: 0.2688 - val_accuracy: 0.9463 Epoch 35/50 448/448 [==============================] - 4s 9ms/step - loss: 0.0039 - accuracy: 0.9990 - val_loss: 0.1850 - val_accuracy: 0.9627
EXTRACT THE TRAINING HISTORY OF THE CUSTOMVGG MODEL INTO A DICTIONARY
Use .history to extract the training data from the CustomVGG model and pd.concat() to display the results.
customVGG_31History = customVGG_31History.history
# Get model results
result_df = compile_results(customVGG_31History, "customVGGModel_31", 32, result_df)
display(result_df.iloc[8])
Model Name           customVGGModel_31
Epochs               35
Batch Size           32
Train Loss           0.005419
Val Loss             0.161012
Train Acc            0.998394
Val Acc              0.962667
[Train - Val] Acc    0.035728
Name: 8, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customVGG_31History)
plt.show()
IMAGE SIZE : 128 X 128 PX
# Building the CustomVGG Model - Without Data Augmentation
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_128.shape[1], X_train_128.shape[2],X_train_128.shape[3])
inputs = Input(shape=input_shape)
# Building the model
x = vgg_block(2, 32)(inputs) # Use fewer filters compared to VGG16
x = vgg_block(2, 64)(x)
x = vgg_block(2, 256)(x)
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.3)(x)
x = Dense(NUM_CLASS, activation='softmax')(x)
# Creating the model
customVGGModel_128 = Model(inputs=inputs, outputs=x, name='CustomVGG_128')
# Compiling the model
customVGGModel_128.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
customVGGModel_128.summary()
Model: "CustomVGG_128"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
sequential (Sequential) (None, 64, 64, 32) 9824
sequential_1 (Sequential) (None, 32, 32, 64) 55936
sequential_2 (Sequential) (None, 16, 16, 256) 739840
global_average_pooling2d (GlobalAveragePooling2D) (None, 256) 0
dense (Dense) (None, 256) 65792
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 875,247
Trainable params: 873,839
Non-trainable params: 1,408
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMVGG MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
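# EarlyStopping below monitors val_accuracy: training stops after 10 epochs without improvement and the best weights seen are restored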
customVGG_128History = customVGGModel_128.fit(X_train_128, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 - loss: 1.2935 - accuracy: 0.5886 - val_loss: 3.2591 - val_accuracy: 0.2140
...
Epoch 42/50 - loss: 0.0016 - accuracy: 0.9998 - val_loss: 0.0250 - val_accuracy: 0.9937 (best val_accuracy)
...
Epoch 50/50 - loss: 0.0014 - accuracy: 0.9998 - val_loss: 0.0352 - val_accuracy: 0.9917
EXTRACT THE TRAINING HISTORY OF THE CUSTOMVGG MODEL INTO A DICTIONARY
Use .history to extract the training data from the CustomVGG model and pd.concat() to display the results.
customVGG_128History = customVGG_128History.history
# Get model results
result_df = compile_results(customVGG_128History, "customVGGModel_128", 32, result_df)
display(result_df.iloc[9])
Model Name           customVGGModel_128
Epochs               50
Batch Size           32
Train Loss           0.001637
Val Loss             0.02503
Train Acc            0.999791
Val Acc              0.993667
[Train - Val] Acc    0.006124
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customVGG_128History)
plt.show()
REGULARIZATION : L2 RIDGE REGULARIZATION | IMAGE SIZE : 31 X 31 PX
def vgg_blockL2(num_convs, num_channels, dropout_rate=0.3, weight_decay=1e-4):
    block = Sequential()
    for _ in range(num_convs):
        block.add(Conv2D(num_channels, kernel_size=3, padding='same', activation='relu', kernel_regularizer=l2(weight_decay)))
        block.add(BatchNormalization())
    block.add(MaxPool2D(pool_size=2, strides=2))
    if dropout_rate > 0:
        block.add(Dropout(dropout_rate))
    return block
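Each Conv2D kernel regularized with l2(weight_decay) contributes a penalty of weight_decay * sum(W**2) that Keras adds to the training loss automatically. As a quick sanity check (a minimal sketch; check_inputs and check_model are throwaway names, not part of the pipeline), the collected penalty terms can be inspected through model.losses:
# Build a small throwaway model around one L2-regularized block
check_inputs = Input(shape=(31, 31, 1))
check_model = Model(check_inputs, vgg_blockL2(2, 32)(check_inputs))
# One regularization term is collected per Conv2D kernel -> expect 2
print(len(check_model.losses))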
# Building the L2 CustomVGG Model - Without Data Augmentation
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_31.shape[1], X_train_31.shape[2], X_train_31.shape[3])
inputs = Input(shape=input_shape)
# Building the model
weight_decay = 1e-4
x = vgg_blockL2(2, 32, dropout_rate=0.3)(inputs)
x = vgg_blockL2(2, 64, dropout_rate=0.3)(x)
x = vgg_blockL2(3, 128, dropout_rate=0.4)(x)
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu', kernel_regularizer=l2(weight_decay))(x)
x = Dropout(0.5)(x)
x = Dense(NUM_CLASS, activation='softmax')(x)
# Creating the model
customVGGModelL2_31 = Model(inputs=inputs, outputs=x, name='CustomVGGL2_31')
# Compiling the model
customVGGModelL2_31.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
customVGGModelL2_31.summary()
Model: "CustomVGGL2_31"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
sequential (Sequential) (None, 15, 15, 32) 9824
sequential_1 (Sequential) (None, 7, 7, 64) 55936
sequential_2 (Sequential) (None, 3, 3, 128) 370560
global_average_pooling2d (GlobalAveragePooling2D) (None, 128) 0
dense (Dense) (None, 256) 33024
dropout_3 (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 473,199
Trainable params: 472,047
Non-trainable params: 1,152
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMVGG MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
customVGGL2_31History = customVGGModelL2_31.fit(X_train_31, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)])
Epoch 1/50 - loss: 1.8313 - accuracy: 0.4305 - val_loss: 2.4550 - val_accuracy: 0.3063
...
Epoch 43/50 - loss: 0.1664 - accuracy: 0.9905 - val_loss: 0.2733 - val_accuracy: 0.9667 (best val_loss and val_accuracy)
...
Epoch 50/50 - loss: 0.1777 - accuracy: 0.9862 - val_loss: 0.3245 - val_accuracy: 0.9587
EXTRACT THE TRAINING HISTORY OF THE CUSTOMVGG MODEL INTO A DICTIONARY
Use .history to extract the training data from the CustomVGG model and pd.concat() to display the results.
customVGGL2_31History = customVGGL2_31History.history
# Get model results
result_df = compile_results(customVGGL2_31History, "customVGGL2Model_31", 32, result_df)
display(result_df.iloc[10])
Model Name           customVGGL2Model_31
Epochs               50
Batch Size           32
Train Loss           0.166368
Val Loss             0.273309
Train Acc            0.990506
Val Acc              0.966667
[Train - Val] Acc    0.023839
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customVGGL2_31History)
plt.show()
REGULARIZATION : L2 RIDGE REGULARIZATION | IMAGE SIZE : 128 X 128 PX
# Building the L2 CustomVGG Model - Without Data Augmentation
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_128.shape[1], X_train_128.shape[2], X_train_128.shape[3])
inputs = Input(shape=input_shape)
# Building the model
weight_decay = 1e-4
x = vgg_blockL2(2, 32, dropout_rate=0.3)(inputs)
x = vgg_blockL2(2, 64, dropout_rate=0.3)(x)
x = vgg_blockL2(3, 256, dropout_rate=0.4)(x)
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu', kernel_regularizer=l2(weight_decay))(x)
x = Dropout(0.5)(x)
x = Dense(NUM_CLASS, activation='softmax')(x)
# Creating the model
customVGGModelL2_128 = Model(inputs=inputs, outputs=x, name='CustomVGGL2_128')
# Compiling the model
customVGGModelL2_128.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
customVGGModelL2_128.summary()
Model: "CustomVGGL2_128"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
sequential (Sequential) (None, 64, 64, 32) 9824
sequential_1 (Sequential) (None, 32, 32, 64) 55936
sequential_2 (Sequential) (None, 16, 16, 256) 1330944
global_average_pooling2d (GlobalAveragePooling2D) (None, 256) 0
dense (Dense) (None, 256) 65792
dropout_3 (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 1,466,351
Trainable params: 1,464,431
Non-trainable params: 1,920
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMVGG MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
customVGGL2_128History = customVGGModelL2_128.fit(X_train_128, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 - loss: 1.6708 - accuracy: 0.4958 - val_loss: 4.3749 - val_accuracy: 0.2290
...
Epoch 25/50 - loss: 0.1304 - accuracy: 0.9935 - val_loss: 0.1469 - val_accuracy: 0.9883 (best val_accuracy)
...
Epoch 35/50 - loss: 0.1197 - accuracy: 0.9940 - val_loss: 0.2714 - val_accuracy: 0.9643
EXTRACT THE TRAINING HISTORY OF THE CUSTOMVGG MODEL INTO A DICTIONARY
Use .history to extract the training data from the CustomVGG model and pd.concat() to display the results.
customVGGL2_128History = customVGGL2_128History.history
# Get model results
result_df = compile_results(customVGGL2_128History, "customVGGL2Model_128", 32, result_df)
display(result_df.iloc[11])
Model Name           customVGGL2Model_128
Epochs               35
Batch Size           32
Train Loss           0.130412
Val Loss             0.146854
Train Acc            0.993508
Val Acc              0.988333
[Train - Val] Acc    0.005175
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customVGGL2_128History)
plt.show()
IMAGE SIZE : 31 X 31 PX
# Building the CustomVGG Model - With Data Augmentation
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_31.shape[1], X_train_31.shape[2], X_train_31.shape[3])
inputs = Input(shape=input_shape)
# Building the model
x = vgg_block(2, 32)(inputs) # Use fewer filters compared to VGG16
x = vgg_block(2, 64)(x)
x = vgg_block(2, 128)(x) # Reduced depth and number of filters
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.3)(x)
x = Dense(NUM_CLASS, activation='softmax')(x) # Output layer for 15 classes
# Creating the model
customVGGModel_31Augmented = Model(inputs=inputs, outputs=x, name='CustomVGG_31Augmented')
# Compiling the model
customVGGModel_31Augmented.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
customVGGModel_31Augmented.summary()
Model: "CustomVGG_31Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
sequential (Sequential) (None, 15, 15, 32) 9824
sequential_1 (Sequential) (None, 7, 7, 64) 55936
sequential_2 (Sequential) (None, 3, 3, 128) 222464
global_average_pooling2d (GlobalAveragePooling2D) (None, 128) 0
dense (Dense) (None, 256) 33024
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 325,103
Trainable params: 324,207
Non-trainable params: 896
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMVGG MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
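# Note: the model trains on the augmented images (X_train_31_aug) while validation uses the original, un-augmented set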
customVGGAugmentedHistory_31 = customVGGModel_31Augmented.fit(X_train_31_aug, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 - loss: 1.6965 - accuracy: 0.4396 - val_loss: 3.8117 - val_accuracy: 0.2127
...
Epoch 39/50 - loss: 0.0047 - accuracy: 0.9988 - val_loss: 0.2086 - val_accuracy: 0.9557 (best val_accuracy)
...
Epoch 49/50 - loss: 0.0076 - accuracy: 0.9976 - val_loss: 0.2483 - val_accuracy: 0.9490
EXTRACT THE TRAINING HISTORY OF THE CUSTOMVGG MODEL INTO A DICTIONARY
Use .history to extract the training data from the base model and pd.concat() to display the results.
customVGGAugmentedHistory_31 = customVGGAugmentedHistory_31.history
# Get model results
result_df = compile_results(customVGGAugmentedHistory_31, "customVGGModel_31Augmented", 32, result_df)
display(result_df.iloc[12])
Model Name           customVGGModel_31Augmented
Epochs               49
Batch Size           32
Train Loss           0.004733
Val Loss             0.208608
Train Acc            0.998813
Val Acc              0.955667
[Train - Val] Acc    0.043147
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customVGGAugmentedHistory_31)
plt.show()
IMAGE SIZE : 128 X 128 PX
# Building the CustomVGG Model - With Data Augmentation
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_128.shape[1], X_train_128.shape[2], X_train_128.shape[3])
inputs = Input(shape=input_shape)
# Building the model
x = vgg_block(2, 32)(inputs) # Use fewer filters compared to VGG16
x = vgg_block(2, 64)(x)
x = vgg_block(2, 256)(x)
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.3)(x)
x = Dense(NUM_CLASS, activation='softmax')(x)
# Creating the model
customVGGModel_128Augmented = Model(inputs=inputs, outputs=x, name='CustomVGG_128Augmented')
# Compiling the model
customVGGModel_128Augmented.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
customVGGModel_128Augmented.summary()
Model: "CustomVGG_128Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
sequential (Sequential) (None, 64, 64, 32) 9824
sequential_1 (Sequential) (None, 32, 32, 64) 55936
sequential_2 (Sequential) (None, 16, 16, 256) 739840
global_average_pooling2d (GlobalAveragePooling2D) (None, 256) 0
dense (Dense) (None, 256) 65792
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 875,247
Trainable params: 873,839
Non-trainable params: 1,408
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMVGG MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
customVGGAugmentedHistory_128 = customVGGModel_128Augmented.fit(X_train_128_aug, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 - loss: 1.5656 - accuracy: 0.4917 - val_loss: 3.0141 - val_accuracy: 0.3297
...
Epoch 13/50 - loss: 0.0330 - accuracy: 0.9915 - val_loss: 0.0420 - val_accuracy: 0.9880 (best val_accuracy)
...
Epoch 23/50 - loss: 0.0118 - accuracy: 0.9969 - val_loss: 0.1541 - val_accuracy: 0.9613
EXTRACT THE TRAINING HISTORY OF THE CUSTOMVGG MODEL INTO A DICTIONARY
Use .history to extract the training data from the CustomVGG model and pd.concat() to display the results.
customVGGAugmentedHistory_128 = customVGGAugmentedHistory_128.history
# Get model results
result_df = compile_results(customVGGAugmentedHistory_128, "customVGG_128Augmented", 32, result_df)
display(result_df.iloc[13])
Model Name           customVGG_128Augmented
Epochs               23
Batch Size           32
Train Loss           0.032988
Val Loss             0.041952
Train Acc            0.991483
Val Acc              0.988
[Train - Val] Acc    0.003483
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customVGGAugmentedHistory_128)
plt.show()
From the CustomVGG model results for both image sizes:
Image Size with Higher Validation Accuracy & Lower Loss : 128 X 128 PX Images + Non-Augmented Data + No Regularization
Regularized Vs Non-Regularized Data : Non-Regularized Data achieved higher training and validation accuracies overall, hence we can conclude that L2 regularization did not improve this model's performance on this dataset.
Augmented Vs Non-Augmented Data : Non-Augmented Data also performed better at both image sizes, suggesting that augmentation did little to help this model generalize to unseen data.
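These comparisons can also be read straight off result_df. As a small sketch (assuming only the column names shown in the result rows above), the runs recorded so far can be ranked by validation accuracy:
# Rank all recorded runs by validation accuracy, highest first
comparison = result_df.sort_values('Val Acc', ascending=False)
display(comparison[['Model Name', 'Epochs', 'Train Acc', 'Val Acc', 'Val Loss']])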
The AlexNet model (CUSTOM), adapted for smaller grayscale images, features a simplified architecture tailored to the specific needs of the 31x31 and 128x128 pixel datasets.
The input shapes are (31, 31, 1) and (128, 128, 1) respectively.
The model is compiled with the categorical_crossentropy loss, suitable for multi-class classification tasks, and its performance is evaluated using the accuracy metric. This model is a streamlined version of the traditional AlexNet, optimized for the simpler task of classifying small grayscale images, and is expected to offer a better balance between efficiency and accuracy.
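For a one-hot label, categorical cross-entropy reduces to the negative log of the probability the model assigns to the true class. A tiny worked example with hypothetical numbers:
import numpy as np
# One-hot label for class 2 of 15 and a hypothetical softmax output
y_true = np.zeros(15); y_true[2] = 1.0
y_pred = np.full(15, 0.02); y_pred[2] = 0.72  # probabilities sum to 1
loss = -np.sum(y_true * np.log(y_pred))
print(loss)  # -log(0.72) ≈ 0.33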
To train the CustomAlexNet model, we will first use our unaugmented data to train and fit the model. For this model, we will continue testing with both image sizes and augmented vs. unaugmented data.
IMAGE SIZE : 31 X 31 PX
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_31.shape[1], X_train_31.shape[2], X_train_31.shape[3])
inputs = Input(shape=input_shape)
# AlexNet-like model adapted for 31x31x1 images
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(NUM_CLASS, activation='softmax')(x)
# Creating the model
customAlexNetModel_31 = Model(inputs=inputs, outputs=x, name='CustomAlexNet_31')
# Compiling the model
customAlexNetModel_31.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
# Model Summary
customAlexNetModel_31.summary()
Model: "CustomAlexNet_31"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
conv2d (Conv2D) (None, 31, 31, 32) 320
batch_normalization (BatchNormalization) (None, 31, 31, 32) 128
max_pooling2d (MaxPooling2D) (None, 15, 15, 32) 0
conv2d_1 (Conv2D) (None, 15, 15, 64) 18496
batch_normalization_1 (BatchNormalization) (None, 15, 15, 64) 256
max_pooling2d_1 (MaxPooling2D) (None, 7, 7, 64) 0
conv2d_2 (Conv2D) (None, 7, 7, 128) 73856
batch_normalization_2 (BatchNormalization) (None, 7, 7, 128) 512
max_pooling2d_2 (MaxPooling2D) (None, 3, 3, 128) 0
flatten (Flatten) (None, 1152) 0
dense (Dense) (None, 256) 295168
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 392,591
Trainable params: 392,143
Non-trainable params: 448
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMALEXNET MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
customAlexNet31History = customAlexNetModel_31.fit(X_train_31, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 - loss: 1.8808 - accuracy: 0.4166 - val_loss: 1.6473 - val_accuracy: 0.4973
...
Epoch 25/50 - loss: 0.0635 - accuracy: 0.9781 - val_loss: 0.3827 - val_accuracy: 0.9220 (best val_accuracy)
...
Epoch 35/50 - loss: 0.0388 - accuracy: 0.9869 - val_loss: 0.5965 - val_accuracy: 0.8800
EXTRACT THE TRAINING HISTORY OF THE CUSTOMALEXNET MODEL INTO A DICTIONARY
Use .history to extract the training data from the AlexNet model and pd.concat() to display the results.
customAlexNet31History = customAlexNet31History.history
# Get model results
result_df = compile_results(customAlexNet31History, "customAlexNet_31", 32, result_df)
display(result_df.iloc[14])
Model Name           customAlexNet_31
Epochs               35
Batch Size           32
Train Loss           0.063484
Val Loss             0.382737
Train Acc            0.97808
Val Acc              0.922
[Train - Val] Acc    0.05608
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customAlexNet31History)
plt.show()
IMAGE SIZE : 128 X 128 PX
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_128.shape[1], X_train_128.shape[2], X_train_128.shape[3])
inputs = Input(shape=input_shape)
# AlexNet-like model adapted for 128x128x1 images
x = Conv2D(64, (11, 11), strides=(4, 4), activation='relu', padding='same')(inputs)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
x = Conv2D(192, (5, 5), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
x = Conv2D(384, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
x = Flatten()(x)
x = Dense(4096, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(4096, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(NUM_CLASS, activation='softmax')(x) # Output layer for 15 classes
# Creating the model
customAlexNetModel_128 = Model(inputs=inputs, outputs=x, name='CustomAlexNet_128')
# Compiling the model
customAlexNetModel_128.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
# Model Summary
customAlexNetModel_128.summary()
Model: "CustomAlexNet_128"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
conv2d (Conv2D) (None, 32, 32, 64) 7808
batch_normalization (BatchNormalization) (None, 32, 32, 64) 256
max_pooling2d (MaxPooling2D) (None, 15, 15, 64) 0
conv2d_1 (Conv2D) (None, 15, 15, 192) 307392
batch_normalization_1 (BatchNormalization) (None, 15, 15, 192) 768
max_pooling2d_1 (MaxPooling2D) (None, 7, 7, 192) 0
conv2d_2 (Conv2D) (None, 7, 7, 384) 663936
batch_normalization_2 (BatchNormalization) (None, 7, 7, 384) 1536
conv2d_3 (Conv2D) (None, 7, 7, 256) 884992
batch_normalization_3 (BatchNormalization) (None, 7, 7, 256) 1024
conv2d_4 (Conv2D) (None, 7, 7, 256) 590080
batch_normalization_4 (BatchNormalization) (None, 7, 7, 256) 1024
max_pooling2d_2 (MaxPooling2D) (None, 3, 3, 256) 0
flatten (Flatten) (None, 2304) 0
dense (Dense) (None, 4096) 9441280
dropout (Dropout) (None, 4096) 0
dense_1 (Dense) (None, 4096) 16781312
dropout_1 (Dropout) (None, 4096) 0
dense_2 (Dense) (None, 15) 61455
=================================================================
Total params: 28,742,863
Trainable params: 28,740,559
Non-trainable params: 2,304
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMALEXNET MODEL
Fit the data using model.fit, applying early stopping to help prevent overfitting.
customAlexNet128History = customAlexNetModel_128.fit(X_train_128, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 - loss: 2.9233 - accuracy: 0.2636 - val_loss: 1.9747 - val_accuracy: 0.3697
...
Epoch 30/50 - loss: 0.0224 - accuracy: 0.9926 - val_loss: 0.2386 - val_accuracy: 0.9513 (best val_accuracy)
...
Epoch 40/50 - loss: 0.0093 - accuracy: 0.9972 - val_loss: 2.5609 - val_accuracy: 0.6533
EXTRACT THE TRAINING HISTORY OF THE CUSTOMALEXNET MODEL INTO A DICTIONARY
Use .history to extract the training data from the AlexNet model and pd.concat() to display the results.
customAlexNet128History = customAlexNet128History.history
# Get model results
result_df = compile_results(customAlexNet128History, "customAlexNetModel_128", 32, result_df)
display(result_df.iloc[15])
Model Name customAlexNetModel_128 Epochs 40 Batch Size 32 Train Loss 0.022377 Val Loss 0.238607 Train Acc 0.9926 Val Acc 0.951333 [Train - Val] Acc 0.041267 Name: 15, dtype: object
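For reference, compile_results (defined earlier in the notebook) appears to report the metrics at the epoch with the best validation accuracy (the row above matches epoch 30, not the final epoch), which is consistent with restore_best_weights. A minimal sketch of such a helper, assuming the column names shown above:
# Hypothetical sketch of a compile_results-style helper (the actual helper is
# defined earlier in this notebook); it reports the metrics at the epoch with
# the highest validation accuracy, matching restore_best_weights.
def compile_results_sketch(history, model_name, batch_size, result_df):
    best = int(np.argmax(history['val_accuracy']))
    row = pd.DataFrame([{'Model Name': model_name,
                         'Epochs': len(history['loss']),
                         'Batch Size': batch_size,
                         'Train Loss': history['loss'][best],
                         'Val Loss': history['val_loss'][best],
                         'Train Acc': history['accuracy'][best],
                         'Val Acc': history['val_accuracy'][best],
                         '[Train - Val] Acc': history['accuracy'][best] - history['val_accuracy'][best]}])
    return pd.concat([result_df, row], ignore_index=True)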
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customAlexNet128History)
plt.show()
IMAGE SIZE : 31 X 31 PX
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_31.shape[1], X_train_31.shape[2], X_train_31.shape[3])
inputs = Input(shape=input_shape)
# AlexNet-like model adapted for 31x31x1 images
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(NUM_CLASS, activation='softmax')(x)
# Creating the model
customAlexNetModel_31Augmented = Model(inputs=inputs, outputs=x, name='CustomAlexNet_31Augmented')
# Compiling the model
customAlexNetModel_31Augmented.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
# Model Summary
customAlexNetModel_31Augmented.summary()
Model: "CustomAlexNet_31Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
conv2d (Conv2D) (None, 31, 31, 32) 320
batch_normalization (BatchN (None, 31, 31, 32) 128
ormalization)
max_pooling2d (MaxPooling2D (None, 15, 15, 32) 0
)
conv2d_1 (Conv2D) (None, 15, 15, 64) 18496
batch_normalization_1 (Batc (None, 15, 15, 64) 256
hNormalization)
max_pooling2d_1 (MaxPooling (None, 7, 7, 64) 0
2D)
conv2d_2 (Conv2D) (None, 7, 7, 128) 73856
batch_normalization_2 (Batc (None, 7, 7, 128) 512
hNormalization)
max_pooling2d_2 (MaxPooling (None, 3, 3, 128) 0
2D)
flatten (Flatten) (None, 1152) 0
dense (Dense) (None, 256) 295168
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 392,591
Trainable params: 392,143
Non-trainable params: 448
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMALEXNET MODEL
Fit the data using model.fit. Apply early stopping to the model as well to try and prevent overfitting of data.
customAlexNet31AugmentedHistory = customAlexNetModel_31Augmented.fit(X_train_31_aug, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 3s 6ms/step - loss: 2.2768 - accuracy: 0.2760 - val_loss: 1.7300 - val_accuracy: 0.4597 Epoch 2/50 448/448 [==============================] - 3s 6ms/step - loss: 1.7139 - accuracy: 0.4389 - val_loss: 1.5579 - val_accuracy: 0.4917 Epoch 3/50 448/448 [==============================] - 3s 6ms/step - loss: 1.3694 - accuracy: 0.5518 - val_loss: 3.4775 - val_accuracy: 0.3130 Epoch 4/50 448/448 [==============================] - 3s 6ms/step - loss: 1.1289 - accuracy: 0.6350 - val_loss: 2.3431 - val_accuracy: 0.4373 Epoch 5/50 448/448 [==============================] - 3s 6ms/step - loss: 0.9153 - accuracy: 0.6998 - val_loss: 0.7836 - val_accuracy: 0.7513 Epoch 6/50 448/448 [==============================] - 3s 6ms/step - loss: 0.7635 - accuracy: 0.7463 - val_loss: 0.6426 - val_accuracy: 0.8027 Epoch 7/50 448/448 [==============================] - 3s 6ms/step - loss: 0.6472 - accuracy: 0.7844 - val_loss: 0.5340 - val_accuracy: 0.8273 Epoch 8/50 448/448 [==============================] - 3s 6ms/step - loss: 0.5508 - accuracy: 0.8143 - val_loss: 2.2471 - val_accuracy: 0.5750 Epoch 9/50 448/448 [==============================] - 3s 6ms/step - loss: 0.4739 - accuracy: 0.8432 - val_loss: 0.5579 - val_accuracy: 0.8370 Epoch 10/50 448/448 [==============================] - 3s 6ms/step - loss: 0.3990 - accuracy: 0.8644 - val_loss: 0.4826 - val_accuracy: 0.8563 Epoch 11/50 448/448 [==============================] - 3s 6ms/step - loss: 0.3605 - accuracy: 0.8780 - val_loss: 0.8732 - val_accuracy: 0.7730 Epoch 12/50 448/448 [==============================] - 3s 6ms/step - loss: 0.3165 - accuracy: 0.8929 - val_loss: 0.5754 - val_accuracy: 0.8467 Epoch 13/50 448/448 [==============================] - 3s 6ms/step - loss: 0.2836 - accuracy: 0.9050 - val_loss: 0.5417 - val_accuracy: 0.8677 Epoch 14/50 448/448 [==============================] - 3s 6ms/step - loss: 0.2561 - accuracy: 0.9117 - val_loss: 2.6436 - val_accuracy: 0.5793 Epoch 15/50 448/448 [==============================] - 3s 6ms/step - loss: 0.2501 - accuracy: 0.9158 - val_loss: 0.5061 - val_accuracy: 0.8713 Epoch 16/50 448/448 [==============================] - 3s 6ms/step - loss: 0.2299 - accuracy: 0.9220 - val_loss: 0.9135 - val_accuracy: 0.7867 Epoch 17/50 448/448 [==============================] - 3s 6ms/step - loss: 0.2016 - accuracy: 0.9316 - val_loss: 0.4190 - val_accuracy: 0.8910 Epoch 18/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1902 - accuracy: 0.9364 - val_loss: 0.4014 - val_accuracy: 0.8993 Epoch 19/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1648 - accuracy: 0.9451 - val_loss: 0.5812 - val_accuracy: 0.8543 Epoch 20/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1415 - accuracy: 0.9527 - val_loss: 0.6309 - val_accuracy: 0.8647 Epoch 21/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1475 - accuracy: 0.9513 - val_loss: 1.2563 - val_accuracy: 0.7703 Epoch 22/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1402 - accuracy: 0.9523 - val_loss: 0.5062 - val_accuracy: 0.8847 Epoch 23/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1387 - accuracy: 0.9531 - val_loss: 0.4174 - val_accuracy: 0.8993 Epoch 24/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1414 - accuracy: 0.9536 - val_loss: 0.9620 - val_accuracy: 0.8130 Epoch 25/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1240 - accuracy: 0.9580 - val_loss: 1.1505 - 
val_accuracy: 0.7773 Epoch 26/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1144 - accuracy: 0.9629 - val_loss: 0.5486 - val_accuracy: 0.8873 Epoch 27/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1074 - accuracy: 0.9642 - val_loss: 0.9409 - val_accuracy: 0.8273 Epoch 28/50 448/448 [==============================] - 3s 6ms/step - loss: 0.1110 - accuracy: 0.9634 - val_loss: 0.7684 - val_accuracy: 0.8480
EXTRACT THE TRAINING HISTORY OF THE CUSTOMALEXNET MODEL INTO A DICTIONARY
Use .history to extract the training data from the AlexNet model and pd.concat() to display the results.
customAlexNet31AugmentedHistory = customAlexNet31AugmentedHistory.history
# Get model results
result_df = compile_results(customAlexNet31AugmentedHistory, "customAlexNetModel_31Augmented", 32, result_df)
display(result_df.iloc[16])
Model Name customAlexNetModel_31Augmented Epochs 28 Batch Size 32 Train Loss 0.190195 Val Loss 0.40143 Train Acc 0.936405 Val Acc 0.899333 [Train - Val] Acc 0.037072 Name: 16, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customAlexNet31AugmentedHistory)
plt.show()
IMAGE SIZE : 128 X 128 PX
# Clear the previous tensorflow session
tf.keras.backend.clear_session()
# Define the input shape and create the input tensor
input_shape = (X_train_128.shape[1], X_train_128.shape[2], X_train_128.shape[3])
inputs = Input(shape=input_shape)
# AlexNet-like model adapted for 128x128x1 images
x = Conv2D(64, (11, 11), strides=(4, 4), activation='relu', padding='same')(inputs)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
x = Conv2D(192, (5, 5), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
x = Conv2D(384, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
x = Flatten()(x)
x = Dense(4096, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(4096, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(NUM_CLASS, activation='softmax')(x) # NUM_CLASS = 15 vegetable classes
# Creating the model
customAlexNetModel_128Augmented = Model(inputs=inputs, outputs=x, name='CustomAlexNet_128Augmented')
# Compiling the model
customAlexNetModel_128Augmented.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
# Model Summary
customAlexNetModel_128Augmented.summary()
Model: "CustomAlexNet_128Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
conv2d (Conv2D) (None, 32, 32, 64) 7808
batch_normalization (BatchN (None, 32, 32, 64) 256
ormalization)
max_pooling2d (MaxPooling2D (None, 15, 15, 64) 0
)
conv2d_1 (Conv2D) (None, 15, 15, 192) 307392
batch_normalization_1 (Batc (None, 15, 15, 192) 768
hNormalization)
max_pooling2d_1 (MaxPooling (None, 7, 7, 192) 0
2D)
conv2d_2 (Conv2D) (None, 7, 7, 384) 663936
batch_normalization_2 (Batc (None, 7, 7, 384) 1536
hNormalization)
conv2d_3 (Conv2D) (None, 7, 7, 256) 884992
batch_normalization_3 (Batc (None, 7, 7, 256) 1024
hNormalization)
conv2d_4 (Conv2D) (None, 7, 7, 256) 590080
batch_normalization_4 (Batc (None, 7, 7, 256) 1024
hNormalization)
max_pooling2d_2 (MaxPooling (None, 3, 3, 256) 0
2D)
flatten (Flatten) (None, 2304) 0
dense (Dense) (None, 4096) 9441280
dropout (Dropout) (None, 4096) 0
dense_1 (Dense) (None, 4096) 16781312
dropout_1 (Dropout) (None, 4096) 0
dense_2 (Dense) (None, 15) 61455
=================================================================
Total params: 28,742,863
Trainable params: 28,740,559
Non-trainable params: 2,304
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMALEXNET MODEL
Fit the data using model.fit. Apply early stopping to the model as well to try and prevent overfitting of data.
customAlexNet128AugmentedHistory = customAlexNetModel_128Augmented.fit(X_train_128_aug, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 6s 12ms/step - loss: 3.4509 - accuracy: 0.1626 - val_loss: 2.4841 - val_accuracy: 0.2067 Epoch 2/50 448/448 [==============================] - 5s 11ms/step - loss: 2.1487 - accuracy: 0.3170 - val_loss: 1.7935 - val_accuracy: 0.4100 Epoch 3/50 448/448 [==============================] - 5s 11ms/step - loss: 1.6539 - accuracy: 0.4644 - val_loss: 1.7927 - val_accuracy: 0.4240 Epoch 4/50 448/448 [==============================] - 5s 11ms/step - loss: 1.2758 - accuracy: 0.5890 - val_loss: 1.2823 - val_accuracy: 0.5963 Epoch 5/50 448/448 [==============================] - 5s 11ms/step - loss: 0.9781 - accuracy: 0.6852 - val_loss: 7.0295 - val_accuracy: 0.1033 Epoch 6/50 448/448 [==============================] - 5s 12ms/step - loss: 0.7636 - accuracy: 0.7561 - val_loss: 1.1108 - val_accuracy: 0.6883 Epoch 7/50 448/448 [==============================] - 5s 11ms/step - loss: 0.5870 - accuracy: 0.8139 - val_loss: 1.5014 - val_accuracy: 0.6387 Epoch 8/50 448/448 [==============================] - 5s 11ms/step - loss: 0.4488 - accuracy: 0.8586 - val_loss: 2.2141 - val_accuracy: 0.4880 Epoch 9/50 448/448 [==============================] - 5s 11ms/step - loss: 0.3646 - accuracy: 0.8843 - val_loss: 0.5074 - val_accuracy: 0.8400 Epoch 10/50 448/448 [==============================] - 5s 11ms/step - loss: 0.2834 - accuracy: 0.9111 - val_loss: 1.0873 - val_accuracy: 0.7317 Epoch 11/50 448/448 [==============================] - 5s 11ms/step - loss: 0.2459 - accuracy: 0.9252 - val_loss: 0.5932 - val_accuracy: 0.8160 Epoch 12/50 448/448 [==============================] - 5s 11ms/step - loss: 0.2120 - accuracy: 0.9341 - val_loss: 0.6199 - val_accuracy: 0.8217 Epoch 13/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1670 - accuracy: 0.9460 - val_loss: 0.7084 - val_accuracy: 0.8267 Epoch 14/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1436 - accuracy: 0.9546 - val_loss: 0.2373 - val_accuracy: 0.9317 Epoch 15/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1173 - accuracy: 0.9639 - val_loss: 0.8821 - val_accuracy: 0.7703 Epoch 16/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1045 - accuracy: 0.9669 - val_loss: 0.4946 - val_accuracy: 0.8600 Epoch 17/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1074 - accuracy: 0.9667 - val_loss: 0.2958 - val_accuracy: 0.9240 Epoch 18/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0825 - accuracy: 0.9750 - val_loss: 0.3512 - val_accuracy: 0.9130 Epoch 19/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0710 - accuracy: 0.9774 - val_loss: 0.7603 - val_accuracy: 0.8370 Epoch 20/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0660 - accuracy: 0.9786 - val_loss: 0.3387 - val_accuracy: 0.9080 Epoch 21/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0699 - accuracy: 0.9785 - val_loss: 0.3242 - val_accuracy: 0.9213 Epoch 22/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0598 - accuracy: 0.9818 - val_loss: 0.6402 - val_accuracy: 0.8650 Epoch 23/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0580 - accuracy: 0.9816 - val_loss: 0.2003 - val_accuracy: 0.9507 Epoch 24/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0566 - accuracy: 0.9819 - val_loss: 1.7972 - val_accuracy: 0.6410 Epoch 25/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0466 - accuracy: 0.9860 - 
val_loss: 0.2040 - val_accuracy: 0.9453 Epoch 26/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0265 - accuracy: 0.9911 - val_loss: 0.3145 - val_accuracy: 0.9290 Epoch 27/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0407 - accuracy: 0.9874 - val_loss: 1.2748 - val_accuracy: 0.7687 Epoch 28/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0375 - accuracy: 0.9882 - val_loss: 0.4264 - val_accuracy: 0.8970 Epoch 29/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0330 - accuracy: 0.9884 - val_loss: 0.5460 - val_accuracy: 0.8863 Epoch 30/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0347 - accuracy: 0.9891 - val_loss: 1.2439 - val_accuracy: 0.7723 Epoch 31/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0359 - accuracy: 0.9885 - val_loss: 10.7443 - val_accuracy: 0.2453 Epoch 32/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0456 - accuracy: 0.9870 - val_loss: 1.0135 - val_accuracy: 0.8000 Epoch 33/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0242 - accuracy: 0.9925 - val_loss: 0.5460 - val_accuracy: 0.8793
EXTRACT THE TRAINING HISTORY OF THE CUSTOMALEXNET MODEL INTO A DICTIONARY
Use .history to extract the training data from the AlexNet model and pd.concat() to display the results.
customAlexNet128AugmentedHistory = customAlexNet128AugmentedHistory.history
# Get model results
result_df = compile_results(customAlexNet128AugmentedHistory, "customAlexNetModel_128Augmented", 32, result_df)
display(result_df.iloc[17])
Model Name customAlexNetModel_128Augmented Epochs 33 Batch Size 32 Train Loss 0.058011 Val Loss 0.200295 Train Acc 0.981571 Val Acc 0.950667 [Train - Val] Acc 0.030904 Name: 17, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
# Plot the loss and accuracy curve
plot_loss_curve(customAlexNet128AugmentedHistory)
plt.show()
From the CustomAlexNet model for both image sizes,
Image Size with Higher Accuracy & Lower Loss : 128 x 128 PX + Augmented
Augmented vs Non-Augmented Data : For this model, augmented data performed better at image size 128 px but worse at 31 px. This suggests that the variability in the augmented data helped the model generalize by learning more robust feature representations.
As for why it performed worse on 31 px images, augmentation likely introduces noise and distorts critical features at such a low resolution, making it harder for the model to learn from the few key features the images retain (see the sketch below).
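To make this concrete, the snippet below (a sketch, assuming X_train_31 and X_train_128 hold the grayscale arrays loaded earlier) applies the same rotation and shift to both resolutions. A 4 px shift is roughly 13% of a 31 px image but only about 3% of a 128 px image, so the small image loses far more of its structure:
# Sketch: apply the same rotation + shift to a 31 px and a 128 px image to see
# how much more the small image degrades under augmentation.
gen = ImageDataGenerator()
params = {'theta': 30, 'tx': 4, 'ty': 4}  # 30-degree rotation, 4 px shift
fig, axes = plt.subplots(2, 2, figsize=(6, 6))
for row, img in enumerate([X_train_31[0], X_train_128[0]]):
    axes[row, 0].imshow(img.squeeze(), cmap='gray')
    axes[row, 0].set_title(f'Original {img.shape[0]} px')
    axes[row, 1].imshow(gen.apply_transform(img, params).squeeze(), cmap='gray')
    axes[row, 1].set_title('Rotated + shifted')
plt.show()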
Note : This model does not reference / modify any existing architectures and is custom-made purely for this analysis.
To train the CustomNet model, we will first fit it on the unaugmented data. As with the previous models, we will test both image sizes and augmented vs. unaugmented data to find the key differences in performance.
# Defining functions to build models for both input sizes
# Building 31 x 31 px model
def custom_model_31(X_train, NUM_CLASS, model_name):
    tf.keras.backend.clear_session()
    input_shape = (X_train.shape[1], X_train.shape[2], X_train.shape[3])
    inputs = Input(shape=input_shape)
    x = Conv2D(16, (3, 3), padding='same', activation='relu')(inputs)
    x = MaxPooling2D(2, 2)(x)
    x = Conv2D(32, (3, 3), padding='same', activation='relu')(x)
    x = MaxPooling2D(2, 2)(x)
    x = Conv2D(64, (3, 3), padding='same', activation='relu')(x)
    x = MaxPooling2D(2, 2)(x)
    x = Flatten()(x)
    x = Dropout(0.5)(x)
    x = Dense(128, activation='relu')(x)
    x = Dense(NUM_CLASS, activation='softmax')(x)
    model = Model(inputs=inputs, outputs=x, name=model_name)
    model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()
    return model

# Building 128 x 128 px model
def custom_model_128(X_train, NUM_CLASS, model_name):
    tf.keras.backend.clear_session()
    input_shape = (X_train.shape[1], X_train.shape[2], X_train.shape[3])
    inputs = Input(shape=input_shape)
    x = Conv2D(32, (3, 3), padding='same', activation='relu')(inputs)
    x = MaxPooling2D(2, 2)(x)
    x = Conv2D(64, (3, 3), padding='same', activation='relu')(x)
    x = MaxPooling2D(2, 2)(x)
    x = Conv2D(128, (3, 3), padding='same', activation='relu')(x)
    x = MaxPooling2D(2, 2)(x)
    x = Conv2D(256, (3, 3), padding='same', activation='relu')(x)
    x = MaxPooling2D(2, 2)(x)
    x = Flatten()(x)
    x = Dense(256, activation='relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(NUM_CLASS, activation='softmax')(x)
    model = Model(inputs=inputs, outputs=x, name=model_name)
    model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()
    return model
IMAGE SIZE : 31 X 31 PX
custom31 = custom_model_31(X_train_31, NUM_CLASS, model_name="CustomModel31")
Model: "CustomModel31"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
conv2d (Conv2D) (None, 31, 31, 16) 160
max_pooling2d (MaxPooling2D (None, 15, 15, 16) 0
)
conv2d_1 (Conv2D) (None, 15, 15, 32) 4640
max_pooling2d_1 (MaxPooling (None, 7, 7, 32) 0
2D)
conv2d_2 (Conv2D) (None, 7, 7, 64) 18496
max_pooling2d_2 (MaxPooling (None, 3, 3, 64) 0
2D)
flatten (Flatten) (None, 576) 0
dropout (Dropout) (None, 576) 0
dense (Dense) (None, 128) 73856
dense_1 (Dense) (None, 15) 1935
=================================================================
Total params: 99,087
Trainable params: 99,087
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMNET MODEL
Fit the data using model.fit. Apply early stopping to the model as well to try and prevent overfitting of data.
customNet31History = custom31.fit(X_train_31, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 3s 6ms/step - loss: 2.6561 - accuracy: 0.1075 - val_loss: 2.3953 - val_accuracy: 0.2260 Epoch 2/50 448/448 [==============================] - 2s 5ms/step - loss: 2.1864 - accuracy: 0.2931 - val_loss: 1.9194 - val_accuracy: 0.3847 Epoch 3/50 448/448 [==============================] - 2s 5ms/step - loss: 1.6865 - accuracy: 0.4517 - val_loss: 1.3440 - val_accuracy: 0.5693 Epoch 4/50 448/448 [==============================] - 2s 5ms/step - loss: 1.3484 - accuracy: 0.5631 - val_loss: 1.0708 - val_accuracy: 0.6663 Epoch 5/50 448/448 [==============================] - 2s 5ms/step - loss: 1.1050 - accuracy: 0.6457 - val_loss: 1.2579 - val_accuracy: 0.5883 Epoch 6/50 448/448 [==============================] - 2s 5ms/step - loss: 0.9472 - accuracy: 0.6945 - val_loss: 0.7542 - val_accuracy: 0.7637 Epoch 7/50 448/448 [==============================] - 2s 5ms/step - loss: 0.8360 - accuracy: 0.7308 - val_loss: 0.7065 - val_accuracy: 0.7853 Epoch 8/50 448/448 [==============================] - 2s 5ms/step - loss: 0.7424 - accuracy: 0.7631 - val_loss: 0.6200 - val_accuracy: 0.8100 Epoch 9/50 448/448 [==============================] - 2s 5ms/step - loss: 0.6534 - accuracy: 0.7830 - val_loss: 0.7769 - val_accuracy: 0.7667 Epoch 10/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5938 - accuracy: 0.8057 - val_loss: 0.5912 - val_accuracy: 0.8110 Epoch 11/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5294 - accuracy: 0.8276 - val_loss: 0.6002 - val_accuracy: 0.8243 Epoch 12/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5050 - accuracy: 0.8332 - val_loss: 0.4773 - val_accuracy: 0.8563 Epoch 13/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4649 - accuracy: 0.8461 - val_loss: 0.5382 - val_accuracy: 0.8320 Epoch 14/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4377 - accuracy: 0.8580 - val_loss: 0.4497 - val_accuracy: 0.8663 Epoch 15/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4279 - accuracy: 0.8570 - val_loss: 0.5174 - val_accuracy: 0.8463 Epoch 16/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3944 - accuracy: 0.8692 - val_loss: 0.4260 - val_accuracy: 0.8740 Epoch 17/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3781 - accuracy: 0.8759 - val_loss: 0.4907 - val_accuracy: 0.8703 Epoch 18/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3585 - accuracy: 0.8813 - val_loss: 0.4691 - val_accuracy: 0.8593 Epoch 19/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3592 - accuracy: 0.8838 - val_loss: 0.4291 - val_accuracy: 0.8717 Epoch 20/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3281 - accuracy: 0.8909 - val_loss: 0.4524 - val_accuracy: 0.8743 Epoch 21/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3325 - accuracy: 0.8898 - val_loss: 0.4751 - val_accuracy: 0.8703 Epoch 22/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3150 - accuracy: 0.8950 - val_loss: 0.4826 - val_accuracy: 0.8573 Epoch 23/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3219 - accuracy: 0.8921 - val_loss: 0.4632 - val_accuracy: 0.8773 Epoch 24/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3140 - accuracy: 0.8988 - val_loss: 0.4131 - val_accuracy: 0.8877 Epoch 25/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2789 - accuracy: 0.9081 - val_loss: 0.4841 - 
val_accuracy: 0.8683 Epoch 26/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2527 - accuracy: 0.9158 - val_loss: 0.3863 - val_accuracy: 0.8973 Epoch 27/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2723 - accuracy: 0.9097 - val_loss: 0.4229 - val_accuracy: 0.8850 Epoch 28/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2560 - accuracy: 0.9150 - val_loss: 0.5184 - val_accuracy: 0.8677 Epoch 29/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2632 - accuracy: 0.9124 - val_loss: 0.4802 - val_accuracy: 0.8727 Epoch 30/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2684 - accuracy: 0.9134 - val_loss: 0.3976 - val_accuracy: 0.8917 Epoch 31/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2579 - accuracy: 0.9134 - val_loss: 0.4490 - val_accuracy: 0.8810 Epoch 32/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2362 - accuracy: 0.9221 - val_loss: 0.4599 - val_accuracy: 0.8747 Epoch 33/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2214 - accuracy: 0.9272 - val_loss: 0.4604 - val_accuracy: 0.8750 Epoch 34/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2246 - accuracy: 0.9260 - val_loss: 0.4410 - val_accuracy: 0.8957 Epoch 35/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2231 - accuracy: 0.9265 - val_loss: 0.4559 - val_accuracy: 0.8877 Epoch 36/50 448/448 [==============================] - 2s 5ms/step - loss: 0.2328 - accuracy: 0.9247 - val_loss: 0.4612 - val_accuracy: 0.8787
EXTRACT THE TRAINING HISTORY OF THE CUSTOMNET MODEL INTO A DICTIONARY
Use .history to extract the training data from the custom model and pd.concat() to display the results.
customNet31History = customNet31History.history
# Get model results
result_df = compile_results(customNet31History, "customModel31", 32, result_df)
display(result_df.iloc[18])
Model Name customModel31 Epochs 36 Batch Size 32 Train Loss 0.252661 Val Loss 0.386302 Train Acc 0.915812 Val Acc 0.897333 [Train - Val] Acc 0.018478 Name: 18, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
#Plotting the loss and accuracy curve
plot_loss_curve(customNet31History)
plt.show()
IMAGE SIZE : 128 X 128 PX
custom128 = custom_model_128(X_train_128, NUM_CLASS, model_name="CustomModel128")
Model: "CustomModel128"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
conv2d (Conv2D) (None, 128, 128, 32) 320
max_pooling2d (MaxPooling2D (None, 64, 64, 32) 0
)
conv2d_1 (Conv2D) (None, 64, 64, 64) 18496
max_pooling2d_1 (MaxPooling (None, 32, 32, 64) 0
2D)
conv2d_2 (Conv2D) (None, 32, 32, 128) 73856
max_pooling2d_2 (MaxPooling (None, 16, 16, 128) 0
2D)
conv2d_3 (Conv2D) (None, 16, 16, 256) 295168
max_pooling2d_3 (MaxPooling (None, 8, 8, 256) 0
2D)
flatten (Flatten) (None, 16384) 0
dense (Dense) (None, 256) 4194560
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 4,586,255
Trainable params: 4,586,255
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMNET MODEL
Fit the data using model.fit. Apply early stopping to the model as well to try and prevent overfitting of data.
customNet128History = custom128.fit(X_train_128, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 5s 11ms/step - loss: 2.5811 - accuracy: 0.1447 - val_loss: 2.4664 - val_accuracy: 0.1977 Epoch 2/50 448/448 [==============================] - 5s 10ms/step - loss: 1.8590 - accuracy: 0.4131 - val_loss: 1.4473 - val_accuracy: 0.5407 Epoch 3/50 448/448 [==============================] - 5s 10ms/step - loss: 1.0920 - accuracy: 0.6583 - val_loss: 0.7640 - val_accuracy: 0.7833 Epoch 4/50 448/448 [==============================] - 5s 10ms/step - loss: 0.6034 - accuracy: 0.8142 - val_loss: 0.5681 - val_accuracy: 0.8340 Epoch 5/50 448/448 [==============================] - 5s 10ms/step - loss: 0.3672 - accuracy: 0.8845 - val_loss: 0.5495 - val_accuracy: 0.8490 Epoch 6/50 448/448 [==============================] - 5s 10ms/step - loss: 0.2446 - accuracy: 0.9196 - val_loss: 0.5134 - val_accuracy: 0.8660 Epoch 7/50 448/448 [==============================] - 5s 10ms/step - loss: 0.1925 - accuracy: 0.9394 - val_loss: 0.5303 - val_accuracy: 0.8663 Epoch 8/50 448/448 [==============================] - 5s 10ms/step - loss: 0.1360 - accuracy: 0.9573 - val_loss: 0.4590 - val_accuracy: 0.8877 Epoch 9/50 448/448 [==============================] - 5s 10ms/step - loss: 0.1069 - accuracy: 0.9650 - val_loss: 0.4298 - val_accuracy: 0.8990 Epoch 10/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0843 - accuracy: 0.9714 - val_loss: 0.4688 - val_accuracy: 0.8940 Epoch 11/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0862 - accuracy: 0.9731 - val_loss: 0.4716 - val_accuracy: 0.9013 Epoch 12/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0699 - accuracy: 0.9776 - val_loss: 0.3925 - val_accuracy: 0.9060 Epoch 13/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0597 - accuracy: 0.9802 - val_loss: 0.4046 - val_accuracy: 0.9063 Epoch 14/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0618 - accuracy: 0.9794 - val_loss: 0.4722 - val_accuracy: 0.9047 Epoch 15/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0502 - accuracy: 0.9842 - val_loss: 0.4346 - val_accuracy: 0.9053 Epoch 16/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0432 - accuracy: 0.9862 - val_loss: 0.4540 - val_accuracy: 0.9023 Epoch 17/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0370 - accuracy: 0.9878 - val_loss: 0.4848 - val_accuracy: 0.9083 Epoch 18/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0374 - accuracy: 0.9880 - val_loss: 0.5297 - val_accuracy: 0.8967 Epoch 19/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0453 - accuracy: 0.9854 - val_loss: 0.5148 - val_accuracy: 0.8977 Epoch 20/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0405 - accuracy: 0.9871 - val_loss: 0.4808 - val_accuracy: 0.9127 Epoch 21/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0420 - accuracy: 0.9872 - val_loss: 0.4730 - val_accuracy: 0.9077 Epoch 22/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0321 - accuracy: 0.9895 - val_loss: 0.4930 - val_accuracy: 0.9087 Epoch 23/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0220 - accuracy: 0.9928 - val_loss: 0.5265 - val_accuracy: 0.9040 Epoch 24/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0203 - accuracy: 0.9932 - val_loss: 0.5479 - val_accuracy: 0.9087 Epoch 25/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0275 - accuracy: 0.9911 - 
val_loss: 0.4858 - val_accuracy: 0.9067 Epoch 26/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0197 - accuracy: 0.9932 - val_loss: 0.4426 - val_accuracy: 0.9183 Epoch 27/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0222 - accuracy: 0.9926 - val_loss: 0.5089 - val_accuracy: 0.9137 Epoch 28/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0176 - accuracy: 0.9943 - val_loss: 0.6245 - val_accuracy: 0.8997 Epoch 29/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0215 - accuracy: 0.9935 - val_loss: 0.5476 - val_accuracy: 0.9107 Epoch 30/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0179 - accuracy: 0.9939 - val_loss: 0.6193 - val_accuracy: 0.9060 Epoch 31/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0249 - accuracy: 0.9919 - val_loss: 0.6974 - val_accuracy: 0.8920 Epoch 32/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0203 - accuracy: 0.9933 - val_loss: 0.5172 - val_accuracy: 0.9193 Epoch 33/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0212 - accuracy: 0.9939 - val_loss: 0.4853 - val_accuracy: 0.9213 Epoch 34/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0159 - accuracy: 0.9945 - val_loss: 0.4749 - val_accuracy: 0.9150 Epoch 35/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0139 - accuracy: 0.9952 - val_loss: 0.5136 - val_accuracy: 0.9207 Epoch 36/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0130 - accuracy: 0.9957 - val_loss: 0.5640 - val_accuracy: 0.9107 Epoch 37/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0138 - accuracy: 0.9955 - val_loss: 0.5805 - val_accuracy: 0.9093 Epoch 38/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0150 - accuracy: 0.9952 - val_loss: 0.5022 - val_accuracy: 0.9157 Epoch 39/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0194 - accuracy: 0.9942 - val_loss: 0.6414 - val_accuracy: 0.8990 Epoch 40/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0136 - accuracy: 0.9960 - val_loss: 0.5629 - val_accuracy: 0.9123 Epoch 41/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0107 - accuracy: 0.9966 - val_loss: 0.5073 - val_accuracy: 0.9220 Epoch 42/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0127 - accuracy: 0.9957 - val_loss: 0.5788 - val_accuracy: 0.9153 Epoch 43/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0110 - accuracy: 0.9965 - val_loss: 0.5654 - val_accuracy: 0.9163 Epoch 44/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0144 - accuracy: 0.9962 - val_loss: 0.4944 - val_accuracy: 0.9223 Epoch 45/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0163 - accuracy: 0.9946 - val_loss: 0.4862 - val_accuracy: 0.9130 Epoch 46/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0134 - accuracy: 0.9960 - val_loss: 0.4833 - val_accuracy: 0.9267 Epoch 47/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0089 - accuracy: 0.9974 - val_loss: 0.4745 - val_accuracy: 0.9273 Epoch 48/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0142 - accuracy: 0.9955 - val_loss: 0.5599 - val_accuracy: 0.9110 Epoch 49/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0333 - accuracy: 0.9907 - val_loss: 0.4801 - val_accuracy: 0.9160 Epoch 50/50 448/448 [==============================] - 5s 
10ms/step - loss: 0.0124 - accuracy: 0.9957 - val_loss: 0.6283 - val_accuracy: 0.9040
EXTRACT THE TRAINING HISTORY OF THE CUSTOMNET MODEL INTO A DICTIONARY
Use .history to extract the training data from the custom model and pd.concat() to display the results.
customNet128History = customNet128History.history
# Get model results
result_df = compile_results(customNet128History, "customModel128", 32, result_df)
display(result_df.iloc[19])
Model Name customModel128 Epochs 50 Batch Size 32 Train Loss 0.008938 Val Loss 0.474456 Train Acc 0.997417 Val Acc 0.927333 [Train - Val] Acc 0.070084 Name: 19, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
# Plotting the loss and accuracy curve
plot_loss_curve(customNet128History)
plt.show()
IMAGE SIZE : 31 X 31 PX
custom31Augmented = custom_model_31(X_train_31_aug, NUM_CLASS, model_name="CustomModel31Augmented")
Model: "CustomModel31Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
conv2d (Conv2D) (None, 31, 31, 16) 160
max_pooling2d (MaxPooling2D (None, 15, 15, 16) 0
)
conv2d_1 (Conv2D) (None, 15, 15, 32) 4640
max_pooling2d_1 (MaxPooling (None, 7, 7, 32) 0
2D)
conv2d_2 (Conv2D) (None, 7, 7, 64) 18496
max_pooling2d_2 (MaxPooling (None, 3, 3, 64) 0
2D)
flatten (Flatten) (None, 576) 0
dropout (Dropout) (None, 576) 0
dense (Dense) (None, 128) 73856
dense_1 (Dense) (None, 15) 1935
=================================================================
Total params: 99,087
Trainable params: 99,087
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMNET MODEL
Fit the data using model.fit. Apply early stopping to the model as well to try and prevent overfitting of data.
customNet31HistoryAugmented = custom31Augmented.fit(X_train_31_aug, y_train_31, epochs=50, validation_data=(X_val_31, y_val_31), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 3s 6ms/step - loss: 2.6493 - accuracy: 0.0990 - val_loss: 2.4314 - val_accuracy: 0.2130 Epoch 2/50 448/448 [==============================] - 2s 5ms/step - loss: 2.3961 - accuracy: 0.1994 - val_loss: 2.0832 - val_accuracy: 0.3793 Epoch 3/50 448/448 [==============================] - 2s 5ms/step - loss: 2.1159 - accuracy: 0.3005 - val_loss: 1.7185 - val_accuracy: 0.4597 Epoch 4/50 448/448 [==============================] - 2s 5ms/step - loss: 1.9123 - accuracy: 0.3716 - val_loss: 1.5193 - val_accuracy: 0.5160 Epoch 5/50 448/448 [==============================] - 2s 5ms/step - loss: 1.7078 - accuracy: 0.4358 - val_loss: 1.3713 - val_accuracy: 0.5670 Epoch 6/50 448/448 [==============================] - 2s 5ms/step - loss: 1.4893 - accuracy: 0.5096 - val_loss: 1.1271 - val_accuracy: 0.6510 Epoch 7/50 448/448 [==============================] - 2s 5ms/step - loss: 1.3794 - accuracy: 0.5444 - val_loss: 1.0949 - val_accuracy: 0.6640 Epoch 8/50 448/448 [==============================] - 2s 5ms/step - loss: 1.2651 - accuracy: 0.5837 - val_loss: 1.0296 - val_accuracy: 0.6730 Epoch 9/50 448/448 [==============================] - 2s 5ms/step - loss: 1.1328 - accuracy: 0.6264 - val_loss: 0.7739 - val_accuracy: 0.7583 Epoch 10/50 448/448 [==============================] - 2s 5ms/step - loss: 1.0195 - accuracy: 0.6635 - val_loss: 0.7943 - val_accuracy: 0.7460 Epoch 11/50 448/448 [==============================] - 2s 5ms/step - loss: 0.9599 - accuracy: 0.6833 - val_loss: 0.6644 - val_accuracy: 0.7993 Epoch 12/50 448/448 [==============================] - 2s 5ms/step - loss: 0.9274 - accuracy: 0.6928 - val_loss: 0.6485 - val_accuracy: 0.8030 Epoch 13/50 448/448 [==============================] - 2s 5ms/step - loss: 0.8593 - accuracy: 0.7179 - val_loss: 0.6079 - val_accuracy: 0.8243 Epoch 14/50 448/448 [==============================] - 2s 5ms/step - loss: 0.7957 - accuracy: 0.7341 - val_loss: 0.6008 - val_accuracy: 0.8173 Epoch 15/50 448/448 [==============================] - 2s 5ms/step - loss: 0.7565 - accuracy: 0.7474 - val_loss: 0.5520 - val_accuracy: 0.8290 Epoch 16/50 448/448 [==============================] - 2s 5ms/step - loss: 0.7241 - accuracy: 0.7559 - val_loss: 0.6832 - val_accuracy: 0.7870 Epoch 17/50 448/448 [==============================] - 2s 5ms/step - loss: 0.7156 - accuracy: 0.7595 - val_loss: 0.5973 - val_accuracy: 0.8193 Epoch 18/50 448/448 [==============================] - 2s 5ms/step - loss: 0.6563 - accuracy: 0.7808 - val_loss: 0.4866 - val_accuracy: 0.8487 Epoch 19/50 448/448 [==============================] - 2s 5ms/step - loss: 0.6568 - accuracy: 0.7786 - val_loss: 0.5103 - val_accuracy: 0.8487 Epoch 20/50 448/448 [==============================] - 2s 5ms/step - loss: 0.6469 - accuracy: 0.7858 - val_loss: 0.5748 - val_accuracy: 0.8260 Epoch 21/50 448/448 [==============================] - 2s 5ms/step - loss: 0.6063 - accuracy: 0.7944 - val_loss: 0.4774 - val_accuracy: 0.8547 Epoch 22/50 448/448 [==============================] - 2s 5ms/step - loss: 0.6176 - accuracy: 0.7925 - val_loss: 0.5125 - val_accuracy: 0.8433 Epoch 23/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5822 - accuracy: 0.8098 - val_loss: 0.4733 - val_accuracy: 0.8560 Epoch 24/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5550 - accuracy: 0.8153 - val_loss: 0.5459 - val_accuracy: 0.8340 Epoch 25/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5534 - accuracy: 0.8157 - val_loss: 0.4974 - 
val_accuracy: 0.8473 Epoch 26/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5395 - accuracy: 0.8189 - val_loss: 0.5454 - val_accuracy: 0.8373 Epoch 27/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5275 - accuracy: 0.8221 - val_loss: 0.4627 - val_accuracy: 0.8567 Epoch 28/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5041 - accuracy: 0.8352 - val_loss: 0.5376 - val_accuracy: 0.8403 Epoch 29/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4866 - accuracy: 0.8374 - val_loss: 0.4609 - val_accuracy: 0.8633 Epoch 30/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5144 - accuracy: 0.8279 - val_loss: 0.4830 - val_accuracy: 0.8593 Epoch 31/50 448/448 [==============================] - 2s 5ms/step - loss: 0.5063 - accuracy: 0.8309 - val_loss: 0.5523 - val_accuracy: 0.8243 Epoch 32/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4926 - accuracy: 0.8325 - val_loss: 0.4723 - val_accuracy: 0.8520 Epoch 33/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4603 - accuracy: 0.8454 - val_loss: 0.6245 - val_accuracy: 0.8250 Epoch 34/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4773 - accuracy: 0.8409 - val_loss: 0.4506 - val_accuracy: 0.8690 Epoch 35/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4666 - accuracy: 0.8439 - val_loss: 0.4949 - val_accuracy: 0.8560 Epoch 36/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4707 - accuracy: 0.8430 - val_loss: 0.4725 - val_accuracy: 0.8653 Epoch 37/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4465 - accuracy: 0.8520 - val_loss: 0.4954 - val_accuracy: 0.8593 Epoch 38/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4350 - accuracy: 0.8552 - val_loss: 0.4235 - val_accuracy: 0.8760 Epoch 39/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4345 - accuracy: 0.8535 - val_loss: 0.4867 - val_accuracy: 0.8610 Epoch 40/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4383 - accuracy: 0.8553 - val_loss: 0.4531 - val_accuracy: 0.8687 Epoch 41/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4311 - accuracy: 0.8547 - val_loss: 0.5157 - val_accuracy: 0.8527 Epoch 42/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4312 - accuracy: 0.8570 - val_loss: 0.4816 - val_accuracy: 0.8540 Epoch 43/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4175 - accuracy: 0.8600 - val_loss: 0.4632 - val_accuracy: 0.8660 Epoch 44/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4475 - accuracy: 0.8517 - val_loss: 0.4816 - val_accuracy: 0.8567 Epoch 45/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4042 - accuracy: 0.8674 - val_loss: 0.4592 - val_accuracy: 0.8690 Epoch 46/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4070 - accuracy: 0.8673 - val_loss: 0.4511 - val_accuracy: 0.8727 Epoch 47/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4283 - accuracy: 0.8609 - val_loss: 0.4552 - val_accuracy: 0.8643 Epoch 48/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4029 - accuracy: 0.8670 - val_loss: 0.4311 - val_accuracy: 0.8793 Epoch 49/50 448/448 [==============================] - 2s 5ms/step - loss: 0.4076 - accuracy: 0.8658 - val_loss: 0.4215 - val_accuracy: 0.8823 Epoch 50/50 448/448 [==============================] - 2s 5ms/step - loss: 0.3867 - accuracy: 0.8726 
- val_loss: 0.4473 - val_accuracy: 0.8643
EXTRACT THE TRAINING HISTORY OF THE CUSTOMNET MODEL INTO A DICTIONARY
Use .history to extract the training data from the custom model and pd.concat() to display the results.
customNet31HistoryAugmented = customNet31HistoryAugmented.history
# Get model results
result_df = compile_results(customNet31HistoryAugmented, "customModel31_Augmented", 32, result_df)
display(result_df.iloc[20])
Model Name customModel31_Augmented Epochs 50 Batch Size 32 Train Loss 0.407583 Val Loss 0.42153 Train Acc 0.865759 Val Acc 0.882333 [Train - Val] Acc -0.016574 Name: 20, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
plot_loss_curve(customNet31HistoryAugmented)
plt.show()
IMAGE SIZE : 128 X 128 PX
custom128Augmented = custom_model_128(X_train_128_aug, NUM_CLASS, model_name="CustomModel128Augmented")
Model: "CustomModel128Augmented"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
conv2d (Conv2D) (None, 128, 128, 32) 320
max_pooling2d (MaxPooling2D (None, 64, 64, 32) 0
)
conv2d_1 (Conv2D) (None, 64, 64, 64) 18496
max_pooling2d_1 (MaxPooling (None, 32, 32, 64) 0
2D)
conv2d_2 (Conv2D) (None, 32, 32, 128) 73856
max_pooling2d_2 (MaxPooling (None, 16, 16, 128) 0
2D)
conv2d_3 (Conv2D) (None, 16, 16, 256) 295168
max_pooling2d_3 (MaxPooling (None, 8, 8, 256) 0
2D)
flatten (Flatten) (None, 16384) 0
dense (Dense) (None, 256) 4194560
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 4,586,255
Trainable params: 4,586,255
Non-trainable params: 0
_________________________________________________________________
FITTING THE DATA TO THE CUSTOMNET MODEL
Fit the data using model.fit. Apply early stopping to the model as well to try and prevent overfitting of data.
customNet128HistoryAugmented = custom128Augmented.fit(X_train_128_aug, y_train_128, epochs=50, validation_data=(X_val_128, y_val_128), batch_size=32, callbacks=[EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)])
Epoch 1/50 448/448 [==============================] - 5s 11ms/step - loss: 2.5530 - accuracy: 0.1461 - val_loss: 2.0046 - val_accuracy: 0.3627 Epoch 2/50 448/448 [==============================] - 5s 11ms/step - loss: 1.9726 - accuracy: 0.3559 - val_loss: 1.4196 - val_accuracy: 0.5703 Epoch 3/50 448/448 [==============================] - 5s 11ms/step - loss: 1.4439 - accuracy: 0.5287 - val_loss: 0.9621 - val_accuracy: 0.7133 Epoch 4/50 448/448 [==============================] - 5s 10ms/step - loss: 0.9413 - accuracy: 0.6947 - val_loss: 0.5701 - val_accuracy: 0.8250 Epoch 5/50 448/448 [==============================] - 5s 11ms/step - loss: 0.6205 - accuracy: 0.8006 - val_loss: 0.4196 - val_accuracy: 0.8713 Epoch 6/50 448/448 [==============================] - 5s 10ms/step - loss: 0.4177 - accuracy: 0.8637 - val_loss: 0.4081 - val_accuracy: 0.8747 Epoch 7/50 448/448 [==============================] - 5s 10ms/step - loss: 0.2849 - accuracy: 0.9073 - val_loss: 0.3724 - val_accuracy: 0.8877 Epoch 8/50 448/448 [==============================] - 5s 11ms/step - loss: 0.2119 - accuracy: 0.9285 - val_loss: 0.4003 - val_accuracy: 0.8880 Epoch 9/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1645 - accuracy: 0.9451 - val_loss: 0.3015 - val_accuracy: 0.9143 Epoch 10/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1320 - accuracy: 0.9571 - val_loss: 0.3194 - val_accuracy: 0.9177 Epoch 11/50 448/448 [==============================] - 5s 11ms/step - loss: 0.1207 - accuracy: 0.9610 - val_loss: 0.2853 - val_accuracy: 0.9217 Epoch 12/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0992 - accuracy: 0.9670 - val_loss: 0.3152 - val_accuracy: 0.9167 Epoch 13/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0947 - accuracy: 0.9710 - val_loss: 0.4700 - val_accuracy: 0.8760 Epoch 14/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0815 - accuracy: 0.9724 - val_loss: 0.3558 - val_accuracy: 0.9120 Epoch 15/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0832 - accuracy: 0.9736 - val_loss: 0.3582 - val_accuracy: 0.9090 Epoch 16/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0534 - accuracy: 0.9827 - val_loss: 0.3223 - val_accuracy: 0.9167 Epoch 17/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0569 - accuracy: 0.9813 - val_loss: 0.4773 - val_accuracy: 0.8907 Epoch 18/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0684 - accuracy: 0.9793 - val_loss: 0.4012 - val_accuracy: 0.9110 Epoch 19/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0475 - accuracy: 0.9841 - val_loss: 0.3137 - val_accuracy: 0.9297 Epoch 20/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0374 - accuracy: 0.9873 - val_loss: 0.3307 - val_accuracy: 0.9247 Epoch 21/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0430 - accuracy: 0.9855 - val_loss: 0.3697 - val_accuracy: 0.9130 Epoch 22/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0458 - accuracy: 0.9855 - val_loss: 0.3508 - val_accuracy: 0.9220 Epoch 23/50 448/448 [==============================] - 5s 10ms/step - loss: 0.0425 - accuracy: 0.9861 - val_loss: 0.4168 - val_accuracy: 0.9203 Epoch 24/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0427 - accuracy: 0.9865 - val_loss: 0.3772 - val_accuracy: 0.9157 Epoch 25/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0446 - accuracy: 0.9858 - 
val_loss: 0.4025 - val_accuracy: 0.9130 Epoch 26/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0283 - accuracy: 0.9911 - val_loss: 0.3710 - val_accuracy: 0.9330 Epoch 27/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0315 - accuracy: 0.9897 - val_loss: 0.4078 - val_accuracy: 0.9187 Epoch 28/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0298 - accuracy: 0.9909 - val_loss: 0.3726 - val_accuracy: 0.9237 Epoch 29/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0251 - accuracy: 0.9919 - val_loss: 0.3875 - val_accuracy: 0.9320 Epoch 30/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0279 - accuracy: 0.9908 - val_loss: 0.4057 - val_accuracy: 0.9257 Epoch 31/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0217 - accuracy: 0.9934 - val_loss: 0.3740 - val_accuracy: 0.9317 Epoch 32/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0204 - accuracy: 0.9938 - val_loss: 0.3571 - val_accuracy: 0.9313 Epoch 33/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0256 - accuracy: 0.9913 - val_loss: 0.4053 - val_accuracy: 0.9230 Epoch 34/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0361 - accuracy: 0.9892 - val_loss: 0.4303 - val_accuracy: 0.9133 Epoch 35/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0195 - accuracy: 0.9934 - val_loss: 0.4195 - val_accuracy: 0.9187 Epoch 36/50 448/448 [==============================] - 5s 11ms/step - loss: 0.0345 - accuracy: 0.9892 - val_loss: 0.3755 - val_accuracy: 0.9230
EXTRACT THE TRAINING HISTORY OF THE CUSTOMNET MODEL INTO A DICTIONARY
Use .history to extract the training data from the custom model and pd.concat() to display the results.
customNet128HistoryAugmented = customNet128HistoryAugmented.history
# Get model results
result_df = compile_results(customNet128HistoryAugmented, "customModel128_Augmented", 32, result_df)
display(result_df.iloc[21])
Model Name customModel128_Augmented Epochs 36 Batch Size 32 Train Loss 0.028265 Val Loss 0.371045 Train Acc 0.991134 Val Acc 0.933 [Train - Val] Acc 0.058134 Name: 21, dtype: object
PLOTTING THE LOSS AND ACCURACY CURVE
# Plotting the loss and accuracy curve
plot_loss_curve(customNet128HistoryAugmented)
plt.show()
From the CustomNet model for both image sizes,
Image Size with Higher Accuracy & Lower Loss : 128 X 128 PX + Augmented
Augmented vs Non-Augmented Data : As with CustomAlexNet, augmented data performed better at image size 128 px but worse at 31 px, reinforcing the pattern that the variability in augmented data helps the model learn robust feature representations only when the images retain enough detail to survive the transforms.
CONCLUSION ON AUGMENTED VS UNAUGMENTED DATA :
AUGMENTED DATA WORKS BETTER FOR HIGHER-RESOLUTION IMAGES WITH A CLEARER VIEW, BUT WORSE FOR LOWER-RESOLUTION, PIXELATED IMAGES.
# Sort models by validation then training accuracy, highlighting each
# column's minimum in red and maximum in green
result_df.sort_values(by=["Val Acc", "Train Acc"], ascending=False).style.apply(
    lambda x: ["background-color: red; color: white" if v else "" for v in x == x.min()]
).apply(
    lambda x: ["background-color: green; color: white" if v else "" for v in x == x.max()])
| # | Model Name | Epochs | Batch Size | Train Loss | Val Loss | Train Acc | Val Acc | [Train - Val] Acc |
|---|---|---|---|---|---|---|---|---|
| 9 | customVGGModel_128 | 50 | 32 | 0.001637 | 0.025030 | 0.999791 | 0.993667 | 0.006124 |
| 13 | customVGG_128Augmented | 23 | 32 | 0.032988 | 0.041952 | 0.991483 | 0.988000 | 0.003483 |
| 11 | customVGGL2Model_128 | 35 | 32 | 0.130412 | 0.146854 | 0.993508 | 0.988333 | 0.005175 |
| 8 | customVGGModel_31 | 35 | 32 | 0.005419 | 0.161012 | 0.998394 | 0.962667 | 0.035728 |
| 17 | customAlexNetModel_128Augmented | 33 | 32 | 0.058011 | 0.200295 | 0.981571 | 0.950667 | 0.030904 |
| 12 | customVGGModel_31Augmented | 49 | 32 | 0.004733 | 0.208608 | 0.998813 | 0.955667 | 0.043147 |
| 7 | Conv2DModel_128Augmented | 45 | 32 | 0.043292 | 0.216016 | 0.987155 | 0.953333 | 0.033822 |
| 15 | customAlexNetModel_128 | 40 | 32 | 0.022377 | 0.238607 | 0.992600 | 0.951333 | 0.041267 |
| 10 | customVGGL2Model_31 | 50 | 32 | 0.166368 | 0.273309 | 0.990506 | 0.966667 | 0.023839 |
| 5 | Conv2DModel_128 | 43 | 32 | 0.025198 | 0.297138 | 0.991832 | 0.937000 | 0.054832 |
| 4 | Conv2DModel_31 | 42 | 32 | 0.074720 | 0.352616 | 0.975777 | 0.926667 | 0.049110 |
| 21 | customModel128_Augmented | 36 | 32 | 0.028265 | 0.371045 | 0.991134 | 0.933000 | 0.058134 |
| 14 | customAlexNet_31 | 35 | 32 | 0.063484 | 0.382737 | 0.978080 | 0.922000 | 0.056080 |
| 18 | customModel31 | 36 | 32 | 0.252661 | 0.386302 | 0.915812 | 0.897333 | 0.018478 |
| 16 | customAlexNetModel_31Augmented | 28 | 32 | 0.190195 | 0.401430 | 0.936405 | 0.899333 | 0.037072 |
| 6 | Conv2DModel_31Augmented | 50 | 32 | 0.122438 | 0.412674 | 0.958743 | 0.912667 | 0.046077 |
| 20 | customModel31_Augmented | 50 | 32 | 0.407583 | 0.421530 | 0.865759 | 0.882333 | -0.016574 |
| 19 | customModel128 | 50 | 32 | 0.008938 | 0.474456 | 0.997417 | 0.927333 | 0.070084 |
| 3 | BaselineModel128Augmented | 50 | 32 | 1.107781 | 2.362754 | 0.640419 | 0.439000 | 0.201419 |
| 1 | BaselineModel128 | 46 | 32 | 1.977940 | 2.420122 | 0.352949 | 0.270667 | 0.082283 |
| 2 | BaselineModel31Augmented | 50 | 32 | 0.540158 | 2.654329 | 0.821431 | 0.544000 | 0.277431 |
| 0 | BaselineModel31 | 50 | 32 | 0.125429 | 2.713443 | 0.959372 | 0.653000 | 0.306372 |
To tune the VGG model for better results, we will use the following approach:
Parameters Tuned : learning rate, momentum, number of filters in each block, dense units, dropout rate, activation function, and regularization type and strength (see the hp definitions in the tuning class below).
We will tune the same model for both 31 px and 128 px, since it performed the best for both image sizes. For both sizes, we will not use augmented data, as unaugmented data was shown to provide higher validation accuracy and lower loss.
Models Used : customVGGModel_128 & customVGGModel_31
PERFORM HYPERPARAMETER TUNING FOR VGG MODEL
IMAGE SIZE : 31 X 31 PX
class TuneVGGModel(kt.HyperModel):
    def __init__(self, input_shape, num_classes, X_train):
        self.input_shape = input_shape
        self.num_classes = num_classes
        self.X_train = X_train

    def build(self, hp):
        inputs = tf.keras.Input(shape=self.input_shape)
        # Hyperparameter search space
        learning_rate = hp.Float("learning_rate", min_value=1e-3, max_value=1e-1, sampling="log")
        momentum = hp.Float("momentum", min_value=0.5, max_value=0.9, step=0.1)
        num_filters_1 = hp.Int("num_filters_1", min_value=16, max_value=64, step=16, default=32)
        num_filters_2 = hp.Int("num_filters_2", min_value=32, max_value=128, step=32, default=64)
        num_filters_3 = hp.Int("num_filters_3", min_value=64, max_value=256, step=64, default=128)
        dense_units = hp.Choice("dense_units", values=[128, 256, 512], default=256)
        dropout_rate = hp.Float("dropout_rate", min_value=0.1, max_value=0.5, step=0.1)
        activation = hp.Choice("activation", values=['relu', 'elu', 'leaky_relu'])
        reg_type = hp.Choice('regularizer', values=['none', 'l1', 'l2', 'l1_l2'])
        reg_value = hp.Float("reg_value", min_value=1e-5, max_value=1e-3, sampling="log")
        # Select regularizer
        if reg_type == 'l1':
            regularizer = l1(reg_value)
        elif reg_type == 'l2':
            regularizer = l2(reg_value)
        elif reg_type == 'l1_l2':
            regularizer = l1_l2(reg_value)
        else:
            regularizer = None
        # Building the model with hyperparameters
        x = vgg_block(2, num_filters_1)(inputs)
        x = vgg_block(2, num_filters_2)(x)
        x = vgg_block(2, num_filters_3)(x)
        x = GlobalAveragePooling2D()(x)
        # 'leaky_relu' may not be a registered activation string in older Keras,
        # so map it to the tf.nn function
        act_fn = tf.nn.leaky_relu if activation == 'leaky_relu' else activation
        # Apply the tuned activation and regularizer so these hyperparameters take effect
        x = Dense(dense_units, activation=act_fn, kernel_regularizer=regularizer)(x)
        x = Dropout(dropout_rate)(x)
        x = Dense(self.num_classes, activation='softmax')(x)
        model = Model(inputs=inputs, outputs=x)
        # Cosine-decay learning rate schedule over 50 epochs
        steps_per_epoch = np.ceil(len(self.X_train) / 32)
        scheduler = CosineDecay(initial_learning_rate=learning_rate, decay_steps=50 * steps_per_epoch)
        model.compile(optimizer=SGD(learning_rate=scheduler, momentum=momentum), loss='categorical_crossentropy', metrics=['accuracy'])
        return model
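For reference, vgg_block is a helper defined earlier in the notebook. A sketch consistent with the layer summaries shown below (each block stacking 3 x 3 convolutions with batch normalization and ReLU, then 2 x 2 max pooling) would be:
# Sketch of a VGG-style block, assuming this matches the earlier definition
def vgg_block(num_convs, num_filters):
    block = Sequential()
    for _ in range(num_convs):
        block.add(Conv2D(num_filters, (3, 3), padding="same"))  # 3x3 conv, keep spatial size
        block.add(BatchNormalization())
        block.add(Activation("relu"))
    block.add(MaxPooling2D((2, 2)))  # halve the spatial dimensions
    return block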
APPLYING KERAS TUNER TO THE HYPERTUNED MODEL
Using the tunable model defined above, we will now apply Keras Tuner to search for the best hyperparameters, to further optimize model performance and to evaluate whether hyperparameter tuning provides an edge over the CustomVGG model without tuned parameters.
We will try 5 different sets of hyperparameters with our VGG model.
IMAGE SIZE : 31 X 31 PX
VGGTuner31 = kt.RandomSearch(
    TuneVGGModel(input_shape=X_train_31[0].shape, num_classes=NUM_CLASS, X_train=X_train_31),
    objective="val_accuracy",
    overwrite=True,
    project_name="cnn_vgg_31",
    max_trials=5,
)
VGGTuner31.search(
    X_train_31, y_train_31,
    validation_data=(X_val_31, y_val_31),
    epochs=50, batch_size=32,
    callbacks=[EarlyStopping(monitor="val_accuracy", patience=10, restore_best_weights=True)],
)
Trial 5 Complete [00h 03m 24s]
val_accuracy: 0.9646666646003723
Best val_accuracy So Far: 0.9670000076293945
Total elapsed time: 00h 16m 26s
VGGTuner31.results_summary()
Results summary (results in .\cnn_vgg_31; objective: val_accuracy, direction: max). Trials sorted by score:
| Trial | learning_rate | momentum | num_filters_1 | num_filters_2 | num_filters_3 | dense_units | dropout_rate | activation | regularizer | reg_value | Score (val_accuracy) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.09527 | 0.5 | 32 | 32 | 128 | 256 | 0.4 | leaky_relu | l1_l2 | 1.38e-04 | 0.9670 |
| 4 | 0.00432 | 0.8 | 16 | 96 | 256 | 128 | 0.4 | elu | l2 | 1.55e-05 | 0.9647 |
| 2 | 0.00144 | 0.9 | 48 | 128 | 192 | 128 | 0.4 | leaky_relu | l1_l2 | 2.33e-05 | 0.9620 |
| 0 | 0.00238 | 0.6 | 32 | 128 | 128 | 512 | 0.1 | relu | l1_l2 | 1.22e-05 | 0.9340 |
| 3 | 0.00145 | 0.5 | 64 | 64 | 64 | 128 | 0.2 | elu | l2 | 1.12e-05 | 0.9053 |
IMAGE SIZE : 128 X 128 PX
VGGTuner128 = kt.RandomSearch(
    TuneVGGModel(input_shape=X_train_128[0].shape, num_classes=NUM_CLASS, X_train=X_train_128),
    objective="val_accuracy",
    overwrite=True,
    project_name="cnn_vgg_128",
    max_trials=5,
)
VGGTuner128.search(
    X_train_128, y_train_128,
    validation_data=(X_val_128, y_val_128),
    epochs=50, batch_size=32,
    callbacks=[EarlyStopping(monitor="val_accuracy", patience=10, restore_best_weights=True)],
)
Trial 5 Complete [00h 12m 58s]
val_accuracy: 0.9940000176429749
Best val_accuracy So Far: 0.9940000176429749
Total elapsed time: 01h 00m 42s
VGGTuner128.results_summary()
Results summary (results in .\cnn_vgg_128; objective: val_accuracy, direction: max). Trials sorted by score:
| Trial | learning_rate | momentum | num_filters_1 | num_filters_2 | num_filters_3 | dense_units | dropout_rate | activation | regularizer | reg_value | Score (val_accuracy) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.02991 | 0.5 | 48 | 128 | 192 | 512 | 0.3 | elu | l1 | 1.30e-04 | 0.9940 |
| 3 | 0.01296 | 0.8 | 32 | 96 | 192 | 128 | 0.5 | relu | none | 2.19e-05 | 0.9940 |
| 4 | 0.06155 | 0.9 | 16 | 64 | 192 | 256 | 0.5 | leaky_relu | l2 | 1.79e-04 | 0.9940 |
| 2 | 0.05689 | 0.6 | 16 | 128 | 128 | 128 | 0.4 | elu | l1_l2 | 6.30e-04 | 0.9917 |
| 1 | 0.00173 | 0.5 | 32 | 96 | 128 | 256 | 0.5 | elu | none | 4.24e-04 | 0.9837 |
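Beyond the printed summaries, the winning configurations can also be retrieved programmatically; a short sketch:
# Retrieve the best hyperparameter set found by each tuner
best_hp_31 = VGGTuner31.get_best_hyperparameters(num_trials=1)[0]
best_hp_128 = VGGTuner128.get_best_hyperparameters(num_trials=1)[0]
print(best_hp_31.values)   # dict mapping hyperparameter name -> chosen value
print(best_hp_128.values)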
We will save the best model for 31 x 31 px as vgg_model31 and the best model for 128 x 128 px as vgg_model128.
# Saving the Best Models
# 31 x 31 px
vgg_model31 = VGGTuner31.get_best_models()[0]
vgg_model31.save('CNNModels/customVGG31.h5')
vgg_model31.save_weights('CNNModels/customVGG31_Weights.h5')
# 128 x 128 px
vgg_model128 = VGGTuner128.get_best_models()[0]
vgg_model128.save('CNNModels/customVGG128.h5')
vgg_model128.save_weights('CNNModels/customVGG128_Weights.h5')
We use tf.keras.models.load_model to load the final models.
IMAGE SIZE : 31 X 31 PX
tf.get_logger().setLevel("ERROR")
vgg31 = tf.keras.models.load_model('CNNModels/customVGG31.h5')
vgg31.summary()
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 31, 31, 1)] 0
sequential (Sequential) (None, 15, 15, 32) 9824
sequential_1 (Sequential) (None, 7, 7, 32) 18752
sequential_2 (Sequential) (None, 3, 3, 128) 185600
global_average_pooling2d (G (None, 128) 0
lobalAveragePooling2D)
dense (Dense) (None, 256) 33024
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 15) 3855
=================================================================
Total params: 251,055
Trainable params: 250,287
Non-trainable params: 768
_________________________________________________________________
IMAGE SIZE : 128 X 128 PX
tf.get_logger().setLevel("ERROR")
vgg128 = tf.keras.models.load_model('CNNModels/customVGG128.h5')
vgg128.summary()
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
sequential (Sequential) (None, 64, 64, 48) 21648
sequential_1 (Sequential) (None, 32, 32, 128) 204032
sequential_2 (Sequential) (None, 16, 16, 192) 554880
global_average_pooling2d (G (None, 192) 0
lobalAveragePooling2D)
dense (Dense) (None, 512) 98816
dropout (Dropout) (None, 512) 0
dense_1 (Dense) (None, 15) 7695
=================================================================
Total params: 887,071
Trainable params: 885,599
Non-trainable params: 1,472
_________________________________________________________________
IMAGE SIZE : 31 X 31 PX
The classification model demonstrates a high overall accuracy of 96%, indicating strong performance across all classes in a balanced test set of 3000 samples (200 per class).
Key Points:
Precision: The model achieves high precision, with most classes above 90%. The class Bottle Gourd stands out with the highest precision at 99%.
Recall: Recall is also high for all classes. Bottle Gourd achieves perfect recall at 100%, while Cauliflower has the lowest recall at 92%.
F1-Score: F1-scores are consistently high, reflecting a balanced precision and recall. All classes have F1-scores above 90%, with Bottle Gourd again being the highest.
Class Balance: Each class in the test set has an equal number of samples (support = 200), indicating a balanced dataset which allows for a fair comparison across classes.
Averages: Both macro and weighted averages for F1-score are at 96%, underscoring uniform performance across classes due to the balanced class distribution.
Observations:
The class Bottle Gourd is the best-performing category, with the highest metrics across the board.
The class Cauliflower has the lowest F1-score (93%), with Broccoli and Tomato close behind (95%), suggesting these are areas for potential improvement.
Despite the high performance, there may be opportunities to incrementally improve the model, particularly for classes with lower recall, through data augmentation, hyperparameter tuning, or additional data collection.
Overall, the model exhibits excellent classification capabilities, with robustness in identifying various vegetable classes in a balanced dataset.
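As a refresher on how these per-class metrics are derived, the sketch below computes precision, recall, and F1 directly from a confusion matrix (per_class_metrics is an illustrative helper, not part of the notebook):
# Illustrative helper: per-class metrics from a confusion matrix
# (rows = true labels, columns = predicted labels)
def per_class_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                 # correctly classified samples per class
    precision = tp / cm.sum(axis=0)  # TP / (TP + FP)
    recall = tp / cm.sum(axis=1)     # TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1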
# Evaluate the VGG31 model on the test dataset and obtain performance metrics
vgg31.evaluate(X_test_31, y_test_31)
# Predict class labels for the test dataset using the VGG31 model
y_pred_31 = vgg31.predict(X_test_31)
# Generate a classification report to assess the model's performance
report31 = classification_report(
np.argmax(y_test_31, axis=1), np.argmax(y_pred_31, axis=1), target_names=class_labels.values()
)
# Print the classification report to analyze the model's performance
print(report31)
94/94 [==============================] - 1s 4ms/step - loss: 0.1888 - accuracy: 0.9643
94/94 [==============================] - 0s 3ms/step
precision recall f1-score support
Bean 0.97 0.97 0.97 200
Bitter_Gourd 0.96 0.96 0.96 200
Bottle_Gourd 0.99 1.00 0.99 200
Brinjal 0.95 0.99 0.97 200
Broccoli 0.92 0.98 0.95 200
Cabbage 0.97 0.94 0.96 200
Capsicum 0.97 0.98 0.98 200
Carrot 0.97 0.97 0.97 200
Cauliflower 0.95 0.92 0.93 200
Cucumber 0.97 0.97 0.97 200
Papaya 0.98 0.99 0.98 200
Potato 0.97 0.95 0.96 200
Pumpkin 0.96 0.95 0.96 200
Radish 0.98 0.94 0.96 200
Tomato 0.96 0.94 0.95 200
accuracy 0.96 3000
macro avg 0.96 0.96 0.96 3000
weighted avg 0.96 0.96 0.96 3000
PLOTTING AND ANALYZING THE CONFUSION MATRIX FOR 31 X 31 PX IMAGES
High Accuracy: Most classes have high correct classification rates, with Bottle_Gourd achieving perfect accuracy.
Misclassifications: Cauliflower and Radish have higher misclassification rates, suggesting the model is less certain about these classes. Cauliflower exhibits the most confusion, often misclassified as other vegetables and vice versa.
Error Patterns: Pumpkin, Cabbage, and Potato are commonly confused with each other, possibly due to similar characteristics.
Areas for Improvement: Cauliflower and Radish.
Overall Performance: Despite some misclassifications, the model performs well across the majority of the classes.
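To pinpoint such error patterns quantitatively, one can rank the off-diagonal entries of the confusion matrix; a small sketch (top_confusions is a hypothetical helper):
# Hypothetical helper: list the largest off-diagonal confusion-matrix entries
def top_confusions(cm, labels, k=5):
    cm = np.array(cm)
    np.fill_diagonal(cm, 0)  # ignore correct predictions on the diagonal
    for idx in np.argsort(cm, axis=None)[::-1][:k]:
        i, j = np.unravel_index(idx, cm.shape)
        print(f"{labels[i]} misclassified as {labels[j]}: {cm[i, j]} samples")
# e.g. top_confusions(conf_matrix, class_labels) after the cell below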
# Confusion Matrix of Results for 31 x 31 px
# Creating a figure for the plot
plt.figure(1, figsize=(10, 10))
# Setting the title for the confusion matrix
plt.title("Confusion Matrix")
# Calculating the confusion matrix using TensorFlow
conf_matrix = tf.math.confusion_matrix(
np.argmax(y_test_31, axis=1), # True labels
np.argmax(y_pred_31, axis=1), # Predicted labels
num_classes=NUM_CLASS, # Number of classes
dtype=tf.dtypes.int32 # Data type of the matrix
)
# Plotting the confusion matrix using seaborn heatmap
sns.heatmap(conf_matrix, annot=True, fmt="", cbar=False, cmap="OrRd", yticklabels=class_labels.values(), xticklabels=class_labels.values())
# Setting the labels for the y-axis and x-axis
plt.ylabel("True Label")
plt.xlabel("Predicted Label")
# Displaying the plot
plt.show()
IMAGE SIZE : 128 X 128 PX
The classification model achieves a remarkable overall accuracy of 99%, signifying excellent performance across all classes within a balanced test set comprising 3000 samples, with 200 instances per class.
Key Points:
Precision: The model's precision is outstanding, with every class at or above 98%. Notably, several classes, including Bottle Gourd and Papaya, achieve a perfect precision of 100%.
Recall: All classes show high recall rates, with Bottle Gourd again at 100%. The lowest recall rates, at a still notably high 98%, belong to Broccoli, Cabbage, and Radish.
F1-Score: The F1-scores are uniformly high across all classes, reflecting a strong balance between precision and recall, with scores at or above 99%.
Class Balance: The dataset is well-balanced, with each class represented equally in the test set, facilitating a fair evaluation across all categories.
Averages: The macro and weighted average F1-scores are at an impressive 99%, indicating consistent model performance across all classes.
Observations:
Bottle Gourd stands out as the best-performing class, with the highest scores in both precision and recall metrics.
While the overall model performance is exemplary, there is room for incremental improvement in classes such as Broccoli, Cabbage and Radish, which have slightly lower recall.
Opportunities for model refinement could include techniques such as data augmentation, hyperparameter optimization, or collecting more diverse data samples, especially for classes that show marginally lower performance.
Overall, the model demonstrates a robust capability to accurately classify various vegetable classes within a balanced dataset.
# Evaluate the VGG128 model on the test dataset and calculate its performance metrics
vgg128.evaluate(X_test_128, y_test_128)
# Generate class predictions for the test dataset using the VGG128 model
y_pred_128 = vgg128.predict(X_test_128)
# Create a classification report to assess the model's performance
report128 = classification_report(
np.argmax(y_test_128, axis=1), np.argmax(y_pred_128, axis=1), target_names=class_labels.values()
)
# Print the classification report to analyze and display the model's performance
print(report128)
94/94 [==============================] - 1s 11ms/step - loss: 0.0223 - accuracy: 0.9933
94/94 [==============================] - 1s 9ms/step
precision recall f1-score support
Bean 0.99 0.99 0.99 200
Bitter_Gourd 0.99 0.99 0.99 200
Bottle_Gourd 1.00 1.00 1.00 200
Brinjal 0.99 0.99 0.99 200
Broccoli 1.00 0.98 0.99 200
Cabbage 1.00 0.98 0.99 200
Capsicum 1.00 0.99 1.00 200
Carrot 0.99 1.00 1.00 200
Cauliflower 0.99 0.99 0.99 200
Cucumber 1.00 0.99 1.00 200
Papaya 1.00 1.00 1.00 200
Potato 1.00 0.99 0.99 200
Pumpkin 0.98 0.99 0.99 200
Radish 0.99 0.98 0.99 200
Tomato 0.99 1.00 0.99 200
accuracy 0.99 3000
macro avg 0.99 0.99 0.99 3000
weighted avg 0.99 0.99 0.99 3000
PLOTTING AND ANALYZING THE CONFUSION MATRIX FOR 128 X 128 PX IMAGES
High Accuracy: The model shows high accuracy in classifying most vegetables. Notably, Bottle_Gourd is classified with 100% accuracy.
Misclassifications: Broccoli and Potato have slightly higher misclassification rates, indicating areas where the model is less confident. Broccoli has been misclassified as Bean and Cauliflower, which may suggest similarity in the features the model uses for these classes.
Error Patterns: Potato has occasionally been confused with Tomato and Radish, possibly due to shared characteristics between these classes that the model finds difficult to distinguish.
Areas for Improvement: Broccoli and Potato, by enhancing the model's feature extraction capabilities or by providing more diverse training samples for these classes.
Overall Performance: The model achieves commendable performance for the majority of the classes, with room for incremental improvement in classifying Broccoli and Potato more accurately.
# Confusion Matrix of Results for 128 x 128 px
# Creating a figure for the plot
plt.figure(1, figsize=(10, 10))
# Setting the title for the confusion matrix
plt.title("Confusion Matrix")
# Calculating the confusion matrix using TensorFlow
conf_matrix = tf.math.confusion_matrix(
np.argmax(y_test_128, axis=1), # True labels
np.argmax(y_pred_128, axis=1), # Predicted labels
num_classes=NUM_CLASS, # Number of classes
dtype=tf.dtypes.int32 # Data type of the matrix
)
# Plotting the confusion matrix using seaborn heatmap
sns.heatmap(conf_matrix, annot=True, fmt="", cbar=False, cmap="OrRd", yticklabels=class_labels.values(), xticklabels=class_labels.values())
# Setting the labels for the y-axis and x-axis
plt.ylabel("True Label")
plt.xlabel("Predicted Label")
# Displaying the plot
plt.show()
IMAGE SIZE : 31 X 31 PX
# Identify the misclassified samples
wrong = np.argmax(y_test_31, axis=1) != np.argmax(y_pred_31, axis=1)
X_test_wrong_31 = X_test_31[wrong]
y_test_wrong_31 = np.argmax(y_test_31[wrong], axis=1)
y_pred_wrong_31 = np.argmax(y_pred_31[wrong], axis=1)
# Determine the number of misclassified samples
num_misclassified = len(X_test_wrong_31)
# Set up the subplot dimensions
n_rows = int(np.ceil(np.sqrt(num_misclassified)))
n_cols = n_rows if num_misclassified > n_rows * (n_rows - 1) else n_rows - 1
# Set up the plot
fig, ax = plt.subplots(n_rows, n_cols, figsize=(20, 20))
ax = ax.ravel() # Flatten the array for easy iteration
# Iterate over each subplot
for i in range(n_rows * n_cols):
if i < num_misclassified:
# Display the image
ax[i].imshow(X_test_wrong_31[i], cmap='gray') # Use cmap='gray' for grayscale images
ax[i].axis("off") # Hide axes
# Get predictions and actual labels
pred_label = class_labels[y_pred_wrong_31[i]]
actual_label = class_labels[y_test_wrong_31[i]]
# Add title with actual and predicted labels
ax[i].set_title(f"Label: {actual_label}\nPredicted: {pred_label}")
else:
ax[i].axis("off") # Turn off axis for empty subplots
plt.tight_layout()
plt.show()
IMAGE SIZE : 128 X 128 PX
# Identify the misclassified samples
wrong = np.argmax(y_test_128, axis=1) != np.argmax(y_pred_128, axis=1)
X_test_wrong_128 = X_test_128[wrong]
y_test_wrong_128 = np.argmax(y_test_128[wrong], axis=1)
y_pred_wrong_128 = np.argmax(y_pred_128[wrong], axis=1)
# Determine the number of misclassified samples
num_misclassified = len(X_test_wrong_128)
# Set up the subplot dimensions
n_rows = int(np.ceil(np.sqrt(num_misclassified)))
n_cols = n_rows if num_misclassified > n_rows * (n_rows - 1) else n_rows - 1
# Set up the plot
fig, ax = plt.subplots(n_rows, n_cols, figsize=(20, 20))
ax = ax.ravel() # Flatten the array for easy iteration
# Iterate over each subplot
for i in range(n_rows * n_cols):
if i < num_misclassified:
# Display the image
ax[i].imshow(X_test_wrong_128[i], cmap='gray') # Use cmap='gray' for grayscale images
ax[i].axis("off") # Hide axes
# Get predictions and actual labels
pred_label = class_labels[y_pred_wrong_128[i]]
actual_label = class_labels[y_test_wrong_128[i]]
# Add title with actual and predicted labels
ax[i].set_title(f"Label: {actual_label}\nPredicted: {pred_label}")
else:
ax[i].axis("off") # Turn off axis for empty subplots
plt.tight_layout()
plt.show()
For 31 PX x 31 PX:
Based on the error analysis, our model for 31 x 31 px predicted 107 images incorrectly on the test set (3000 images), about 3.6% of the test set. This shows that the model performed very well on unseen data, correctly classifying about 96% of it. On closer inspection, some of the errors are understandable, such as misclassifying cauliflower as cabbage and brinjal as broccoli, since such images are easily confused at this low resolution. Nevertheless, the model classified most of the vegetables correctly, indicating a high level of precision and accuracy.
For 128 PX x 128 PX:
Based on the error analysis, our model for 128 x 128 px predicted 20 images incorrectly on the test set (3000 images), about 0.7% of the test set. This shows that the 128 px model performed extremely well on unseen data, correctly classifying 99.3% of it. For the misclassifications, the model may not have generalized to certain vegetables photographed at unusual angles or under unusual lighting, given the limited training data. Nevertheless, the model classified almost all vegetables correctly, indicating a high level of accuracy and precision.
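As a quick sanity check on the counts quoted above, the misclassification totals can be recomputed directly from the arrays already defined (a minimal sketch):
# Recompute the number of misclassified test samples for each image size
errors_31 = int(np.sum(np.argmax(y_test_31, axis=1) != np.argmax(y_pred_31, axis=1)))
errors_128 = int(np.sum(np.argmax(y_test_128, axis=1) != np.argmax(y_pred_128, axis=1)))
print(f"31 x 31 px : {errors_31} / {len(y_test_31)} misclassified")      # ~107 per the analysis above
print(f"128 x 128 px : {errors_128} / {len(y_test_128)} misclassified")  # ~20 per the analysis above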
We use tf.keras.utils.plot_model to get an overview of the layers involved in the models.
IMAGE SIZE : 31 X 31 PX
tf.keras.utils.plot_model(vgg31, show_shapes = True, expand_nested = True, show_layer_activations = True)
IMAGE SIZE : 128 X 128 PX
tf.keras.utils.plot_model(vgg128, show_shapes = True, expand_nested = True, show_layer_activations = True)
Best Model For 31 x 31 px & 128 x 128 px : CustomVGG Model
For this CNN analysis, I tested custom-built models, researched existing architectures extensively, and modified and adapted their layers for this dataset. Overall, I learnt how to create custom-built models and gained a deeper understanding of convolutional neural networks and of how to perform image classification.
Dataset Overview: The dataset comprised images for 15 classes of vegetables, with preprocessing steps like resampling, one-hot encoding and data augmentation.
Training Process: Training used the SGD optimizer with a learning rate of 0.001, a batch size of 32, and up to 50 epochs. Techniques like L2 regularization, early stopping, and dropout were employed to enhance model performance.
Comparative Analysis: Compared to the other models tested, CustomVGG showed superior performance at both grayscale image sizes.
For further refinement in future, the model's performance could be enhanced by testing more architectures and tuning additional hyperparameters. However, judging from the current accuracy, the model has generalized well to unseen data and can predict new data with a high degree of correctness and precision.
Final Results Obtained on Test Set :
| Model | Image Size | Test Accuracy |
|---|---|---|
| CustomVGG Model | 31 x 31 x 1 | 96 % |
| CustomVGG Model | 128 x 128 x 1 | 99 % |